Jan 28 09:42:25 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 09:42:25 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 09:42:25 crc restorecon[4690]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 09:42:25 crc restorecon[4690]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc 
restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:25 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 
09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 
crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 
crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc 
restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 09:42:26 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 09:42:27 crc kubenswrapper[4735]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 09:42:27 crc kubenswrapper[4735]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 09:42:27 crc kubenswrapper[4735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 09:42:27 crc kubenswrapper[4735]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 09:42:27 crc kubenswrapper[4735]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 09:42:27 crc kubenswrapper[4735]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.261854 4735 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.267996 4735 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268030 4735 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268040 4735 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268051 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268060 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268070 4735 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268078 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268087 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268096 4735 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268105 4735 
feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268113 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268121 4735 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268129 4735 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268138 4735 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268146 4735 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268155 4735 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268185 4735 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268194 4735 feature_gate.go:330] unrecognized feature gate: Example Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268203 4735 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268211 4735 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268221 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268232 4735 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268242 4735 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268252 4735 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 
09:42:27.268263 4735 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268274 4735 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268284 4735 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268293 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268301 4735 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268311 4735 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268324 4735 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268336 4735 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268346 4735 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268356 4735 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268365 4735 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268375 4735 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268384 4735 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268393 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268402 4735 
feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268411 4735 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268424 4735 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268438 4735 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268449 4735 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268463 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268474 4735 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268487 4735 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268499 4735 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268509 4735 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268520 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268530 4735 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268541 4735 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268553 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268565 4735 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268574 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268585 4735 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268596 4735 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268607 4735 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268618 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268630 4735 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268639 4735 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268652 4735 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268662 4735 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268671 4735 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268679 4735 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268687 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268696 4735 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268704 4735 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268712 4735 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268721 4735 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268729 4735 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.268737 4735 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.269983 4735 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270014 4735 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270032 4735 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270045 4735 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270057 4735 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 09:42:27 crc 
kubenswrapper[4735]: I0128 09:42:27.270068 4735 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270081 4735 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270093 4735 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270103 4735 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270113 4735 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270124 4735 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270134 4735 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270144 4735 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270156 4735 flags.go:64] FLAG: --cgroup-root="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270165 4735 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270176 4735 flags.go:64] FLAG: --client-ca-file="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270185 4735 flags.go:64] FLAG: --cloud-config="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270196 4735 flags.go:64] FLAG: --cloud-provider="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270205 4735 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270217 4735 flags.go:64] FLAG: --cluster-domain="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270226 4735 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270237 4735 flags.go:64] FLAG: 
--config-dir="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270246 4735 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270257 4735 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270269 4735 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270279 4735 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270289 4735 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270299 4735 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270309 4735 flags.go:64] FLAG: --contention-profiling="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270319 4735 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270328 4735 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270339 4735 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270348 4735 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270361 4735 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270370 4735 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270380 4735 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270391 4735 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270401 4735 flags.go:64] FLAG: --enable-server="true" Jan 28 09:42:27 crc 
kubenswrapper[4735]: I0128 09:42:27.270410 4735 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270423 4735 flags.go:64] FLAG: --event-burst="100" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270434 4735 flags.go:64] FLAG: --event-qps="50" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270444 4735 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270454 4735 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270464 4735 flags.go:64] FLAG: --eviction-hard="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270475 4735 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270484 4735 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270494 4735 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270505 4735 flags.go:64] FLAG: --eviction-soft="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270515 4735 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270525 4735 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270534 4735 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270544 4735 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270555 4735 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270565 4735 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270575 4735 flags.go:64] FLAG: --feature-gates="" Jan 28 09:42:27 crc 
kubenswrapper[4735]: I0128 09:42:27.270586 4735 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270596 4735 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270607 4735 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270617 4735 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270627 4735 flags.go:64] FLAG: --healthz-port="10248" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270637 4735 flags.go:64] FLAG: --help="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270647 4735 flags.go:64] FLAG: --hostname-override="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270657 4735 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270667 4735 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270676 4735 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270686 4735 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270735 4735 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270746 4735 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270757 4735 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270796 4735 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270806 4735 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270816 4735 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270826 4735 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270836 4735 flags.go:64] FLAG: --kube-reserved="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270846 4735 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270855 4735 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270865 4735 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270875 4735 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270884 4735 flags.go:64] FLAG: --lock-file="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270894 4735 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270903 4735 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270913 4735 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270928 4735 flags.go:64] FLAG: --log-json-split-stream="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270937 4735 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270948 4735 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270957 4735 flags.go:64] FLAG: --logging-format="text" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270967 4735 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270977 4735 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 
09:42:27.270987 4735 flags.go:64] FLAG: --manifest-url="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.270998 4735 flags.go:64] FLAG: --manifest-url-header="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271011 4735 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271020 4735 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271032 4735 flags.go:64] FLAG: --max-pods="110" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271042 4735 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271052 4735 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271061 4735 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271071 4735 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271083 4735 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271093 4735 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271102 4735 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271125 4735 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271134 4735 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271144 4735 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271155 4735 flags.go:64] FLAG: --pod-cidr="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271164 4735 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271178 4735 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271188 4735 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271198 4735 flags.go:64] FLAG: --pods-per-core="0" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271207 4735 flags.go:64] FLAG: --port="10250" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271218 4735 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271227 4735 flags.go:64] FLAG: --provider-id="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271237 4735 flags.go:64] FLAG: --qos-reserved="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271247 4735 flags.go:64] FLAG: --read-only-port="10255" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271257 4735 flags.go:64] FLAG: --register-node="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271267 4735 flags.go:64] FLAG: --register-schedulable="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271276 4735 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271293 4735 flags.go:64] FLAG: --registry-burst="10" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271302 4735 flags.go:64] FLAG: --registry-qps="5" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271312 4735 flags.go:64] FLAG: --reserved-cpus="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271322 4735 flags.go:64] FLAG: --reserved-memory="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271334 4735 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 
09:42:27.271344 4735 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271354 4735 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271363 4735 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271373 4735 flags.go:64] FLAG: --runonce="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271385 4735 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271395 4735 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271405 4735 flags.go:64] FLAG: --seccomp-default="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271415 4735 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271426 4735 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271436 4735 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271446 4735 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271456 4735 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271465 4735 flags.go:64] FLAG: --storage-driver-secure="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271475 4735 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271485 4735 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271494 4735 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271504 4735 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 
09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271514 4735 flags.go:64] FLAG: --system-cgroups="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271524 4735 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271539 4735 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271549 4735 flags.go:64] FLAG: --tls-cert-file="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271558 4735 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271569 4735 flags.go:64] FLAG: --tls-min-version="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271578 4735 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271587 4735 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271598 4735 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271607 4735 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271625 4735 flags.go:64] FLAG: --v="2" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271638 4735 flags.go:64] FLAG: --version="false" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271650 4735 flags.go:64] FLAG: --vmodule="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271661 4735 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.271672 4735 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.271937 4735 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.271948 4735 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.271958 4735 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.271968 4735 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.271978 4735 feature_gate.go:330] unrecognized feature gate: Example Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.271986 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.271996 4735 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272005 4735 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272022 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272031 4735 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272040 4735 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272052 4735 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272062 4735 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272072 4735 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272081 4735 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272090 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272099 4735 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272107 4735 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272116 4735 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272125 4735 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272134 4735 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272142 4735 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272151 4735 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272160 4735 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272168 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272177 4735 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272185 4735 feature_gate.go:330] 
unrecognized feature gate: PlatformOperators Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272198 4735 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272206 4735 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272215 4735 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272224 4735 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272234 4735 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272245 4735 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272270 4735 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272288 4735 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272304 4735 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272316 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272328 4735 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272339 4735 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272350 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272361 4735 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272371 4735 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272382 4735 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272392 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272405 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272420 4735 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272435 4735 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272448 4735 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272461 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272471 4735 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272482 4735 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272493 4735 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272503 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272513 4735 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272524 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272541 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272551 4735 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272562 4735 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272572 4735 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272589 4735 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272599 4735 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 
09:42:27.272609 4735 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272619 4735 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272629 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272640 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272651 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272661 4735 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272671 4735 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272681 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272691 4735 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.272701 4735 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.272735 4735 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.285727 4735 
server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.286152 4735 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286267 4735 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286280 4735 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286289 4735 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286298 4735 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286307 4735 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286314 4735 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286320 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286327 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286333 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286348 4735 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286354 4735 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286361 4735 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286368 4735 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 09:42:27 crc 
kubenswrapper[4735]: W0128 09:42:27.286375 4735 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286381 4735 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286397 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286404 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286417 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286424 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286430 4735 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286436 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286442 4735 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286448 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286454 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286472 4735 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286486 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286494 4735 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 
09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286504 4735 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286513 4735 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286521 4735 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286529 4735 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286536 4735 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286542 4735 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286549 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286556 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286563 4735 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286572 4735 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286583 4735 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286592 4735 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286599 4735 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286606 4735 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286615 4735 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286622 4735 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286629 4735 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286637 4735 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286644 4735 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286651 4735 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286658 4735 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286664 4735 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286672 4735 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286679 4735 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286686 4735 
feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286692 4735 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286699 4735 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286706 4735 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286714 4735 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286720 4735 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286747 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286760 4735 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286789 4735 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286796 4735 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286802 4735 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286809 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286815 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286821 4735 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286828 4735 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286835 4735 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286842 4735 feature_gate.go:330] unrecognized feature gate: Example Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286851 4735 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286859 4735 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.286866 4735 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.286877 4735 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287100 4735 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287114 4735 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287123 4735 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287130 4735 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287137 4735 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287144 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287151 4735 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287158 4735 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287164 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287171 4735 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287177 4735 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287186 4735 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287195 4735 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287202 4735 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287210 4735 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287217 4735 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287224 4735 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287233 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287240 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287248 4735 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287255 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287262 4735 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287268 4735 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287277 4735 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287283 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287290 4735 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287297 4735 
feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287304 4735 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287310 4735 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287319 4735 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287327 4735 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287333 4735 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287339 4735 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.287346 4735 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288276 4735 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288291 4735 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288298 4735 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288307 4735 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288313 4735 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288321 4735 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288328 4735 feature_gate.go:330] unrecognized 
feature gate: AlibabaPlatform Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288335 4735 feature_gate.go:330] unrecognized feature gate: Example Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288341 4735 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288348 4735 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288355 4735 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288362 4735 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288369 4735 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288375 4735 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288384 4735 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288391 4735 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288398 4735 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288404 4735 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288412 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288418 4735 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288425 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 
09:42:27.288432 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288438 4735 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288445 4735 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288453 4735 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288461 4735 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288468 4735 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288477 4735 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288483 4735 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288490 4735 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288498 4735 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288505 4735 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288513 4735 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288520 4735 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288527 4735 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 
09:42:27.288534 4735 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.288540 4735 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.288550 4735 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.288804 4735 server.go:940] "Client rotation is on, will bootstrap in background" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.294490 4735 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.294634 4735 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.296368 4735 server.go:997] "Starting client certificate rotation" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.296407 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.297532 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-29 04:20:44.807164978 +0000 UTC Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.297659 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.323272 4735 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.328248 4735 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.328423 4735 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.348565 4735 log.go:25] "Validated CRI v1 runtime API" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.396118 4735 log.go:25] "Validated CRI v1 image API" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.398450 4735 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.403703 4735 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-09-32-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.403734 4735 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.422347 4735 manager.go:217] Machine: {Timestamp:2026-01-28 09:42:27.420431571 +0000 UTC m=+0.570095984 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:39f0a762-e2b9-4dea-b3e1-81189a88d57f BootID:bf1fdcc8-a86e-465f-b307-a31dff32dcb9 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:95:c0:66 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:95:c0:66 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:90:0f:05 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f7:2f:d8 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fe:67:f4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:df:a4:77 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:65:03:22:c4:6e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:76:10:b3:04:ff Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.422662 4735 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.422933 4735 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.423567 4735 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.423991 4735 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.424063 4735 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.424927 4735 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.424972 4735 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.425569 4735 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.425615 4735 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.426464 4735 state_mem.go:36] "Initialized new in-memory state store" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.426614 4735 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.431154 4735 kubelet.go:418] "Attempting to sync node with API server" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.431209 4735 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.431275 4735 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.431306 4735 kubelet.go:324] "Adding apiserver pod source" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.431330 4735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.436314 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.436437 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.436466 4735 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.437679 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.437827 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.438037 4735 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.441084 4735 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.442980 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443023 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443067 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443080 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443103 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443117 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443131 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443153 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443170 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443184 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443214 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.443230 4735 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.444228 4735 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.445852 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.445981 4735 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.445891 4735 server.go:1280] "Started kubelet" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.446692 4735 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.447304 4735 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 09:42:27 crc systemd[1]: Started Kubernetes Kubelet. Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.448970 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.449011 4735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.449140 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:49:47.57700487 +0000 UTC Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.449184 4735 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.449194 4735 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.449250 4735 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.449223 4735 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.450721 4735 server.go:460] "Adding debug handlers to kubelet server" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.450815 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="200ms" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.452142 4735 factory.go:55] Registering systemd factory Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.452169 4735 factory.go:221] Registration of the systemd container factory successfully Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.453881 4735 factory.go:153] Registering CRI-O factory Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.453907 4735 factory.go:221] Registration of the crio container factory successfully Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.453985 4735 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.454016 4735 factory.go:103] Registering Raw factory Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.454035 4735 manager.go:1196] Started watching for new ooms in manager Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.454121 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.454324 4735 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.454817 4735 manager.go:319] Starting recovery of all containers Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.461463 4735 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188edbc61766e82b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 09:42:27.445811243 +0000 UTC m=+0.595475706,LastTimestamp:2026-01-28 09:42:27.445811243 +0000 UTC m=+0.595475706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.469956 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470014 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 
09:42:27.470030 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470045 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470071 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470083 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470096 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470108 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470122 4735 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470134 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470747 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470796 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470818 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470834 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470877 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470889 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470903 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470914 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470927 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470938 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470950 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470961 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.470975 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471025 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471038 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471051 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471066 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471079 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471092 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471102 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471135 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471149 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471162 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" 
seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471173 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471185 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471215 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471226 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471238 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471249 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 28 09:42:27 crc 
kubenswrapper[4735]: I0128 09:42:27.471262 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471275 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.471292 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473157 4735 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473199 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473220 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473238 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473252 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473265 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473281 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473294 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473307 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473320 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473332 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473351 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473363 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473376 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473463 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" 
seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473476 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473489 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473503 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473519 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473531 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473543 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 
09:42:27.473556 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473569 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473582 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473594 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473606 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473619 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473633 4735 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473645 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473657 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473668 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473680 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473692 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473704 4735 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473716 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473728 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473740 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473804 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473822 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473834 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473847 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473859 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473871 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473883 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473895 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473907 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473920 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473931 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473943 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473955 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473968 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473980 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" 
seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.473992 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474004 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474017 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474030 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474050 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474069 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474088 4735 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474109 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474133 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474155 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474175 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474201 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474223 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474246 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474268 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474288 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474309 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474330 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474353 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474373 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474395 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474414 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474434 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474453 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474471 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" 
seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474489 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474509 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474527 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474546 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474566 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474584 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 09:42:27 crc 
kubenswrapper[4735]: I0128 09:42:27.474603 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474622 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474640 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474660 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474678 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474698 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474718 4735 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474743 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474836 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474859 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474882 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474903 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474924 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474942 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474960 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474979 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.474998 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475018 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475038 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475058 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475078 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475097 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475114 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475134 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475153 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475172 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475191 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475210 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475231 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475249 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475268 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475287 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475307 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475325 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475343 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475363 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475382 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475400 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475418 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475437 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475457 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475477 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475496 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" 
Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475514 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475584 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475612 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475639 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475683 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475704 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 
09:42:27.475724 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475743 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475799 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475819 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475838 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475856 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475874 4735 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475892 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475910 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475929 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475948 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475968 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.475988 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476006 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476025 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476045 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476065 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476084 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476105 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 
09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476123 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476144 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476163 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476181 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476200 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476218 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476238 4735 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476267 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476286 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476307 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476325 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476343 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476362 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476381 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476400 4735 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476420 4735 reconstruct.go:97] "Volume reconstruction finished" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.476436 4735 reconciler.go:26] "Reconciler: start to sync state" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.488720 4735 manager.go:324] Recovery completed Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.499210 4735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.501307 4735 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.501356 4735 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.501385 4735 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.501435 4735 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.504332 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.504418 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.504432 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.505652 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.505697 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.505711 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.507016 4735 
cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.507062 4735 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.507083 4735 state_mem.go:36] "Initialized new in-memory state store" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.531300 4735 policy_none.go:49] "None policy: Start" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.532566 4735 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.532673 4735 state_mem.go:35] "Initializing new in-memory state store" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.550381 4735 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.590360 4735 manager.go:334] "Starting Device Plugin manager" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.591655 4735 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.591677 4735 server.go:79] "Starting device plugin registration server" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.592304 4735 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.592329 4735 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.592544 4735 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.592723 4735 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.592764 4735 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.602396 4735 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.602506 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.603540 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.603585 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.603597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.604043 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.604090 4735 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.604141 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.604182 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.604825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.604849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.604861 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.604920 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.604943 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.604954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.605077 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.605178 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.605202 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.606061 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.606085 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.606094 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.606113 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.606138 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.606152 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.606322 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.606523 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.606607 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.607464 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.607492 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.607503 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.607598 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.607641 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.607663 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.607649 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.607709 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.607674 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.608279 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.608307 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.608318 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.608462 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.608491 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.608502 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.608509 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.608533 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.609124 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.609142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.609150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.651382 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="400ms" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679137 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679173 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679189 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679210 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679226 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679241 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679278 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679307 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679323 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679344 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679371 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679397 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679422 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679442 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.679459 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.693266 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.694481 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.694515 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.694525 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.694549 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.695118 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: 
connection refused" node="crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781415 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781502 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781526 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781562 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781586 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781605 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781638 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781697 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781705 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781728 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781808 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781815 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781854 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781651 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781726 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781722 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.781955 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782025 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782120 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782179 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782133 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782236 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 
crc kubenswrapper[4735]: I0128 09:42:27.782313 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782345 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782397 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782422 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782450 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782479 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782426 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.782620 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.896077 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.897397 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.897448 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.897457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.897480 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 09:42:27 crc kubenswrapper[4735]: E0128 09:42:27.897927 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.236:6443: connect: connection refused" node="crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.923902 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.929577 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.951262 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.971561 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: I0128 09:42:27.977448 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.979150 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e6f21124a2716d4d7bc3663f99d2d4d56c74c4c5ac1d9308226841de74320f12 WatchSource:0}: Error finding container e6f21124a2716d4d7bc3663f99d2d4d56c74c4c5ac1d9308226841de74320f12: Status 404 returned error can't find the container with id e6f21124a2716d4d7bc3663f99d2d4d56c74c4c5ac1d9308226841de74320f12 Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.980005 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9e9e8ed71bd56fa563d47bc64df18e6c9c010ac86471d37ba0d2a794113abc55 WatchSource:0}: Error finding container 9e9e8ed71bd56fa563d47bc64df18e6c9c010ac86471d37ba0d2a794113abc55: Status 404 returned error 
can't find the container with id 9e9e8ed71bd56fa563d47bc64df18e6c9c010ac86471d37ba0d2a794113abc55 Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.987187 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0e037e81542b499b95629193ec77dabeb220c67853b701b05286e25c37898f3e WatchSource:0}: Error finding container 0e037e81542b499b95629193ec77dabeb220c67853b701b05286e25c37898f3e: Status 404 returned error can't find the container with id 0e037e81542b499b95629193ec77dabeb220c67853b701b05286e25c37898f3e Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.991155 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-37ed2a4cf6c75c588697f5136d6689b1684ad3478ca04fb57f83c175dfd5b7b5 WatchSource:0}: Error finding container 37ed2a4cf6c75c588697f5136d6689b1684ad3478ca04fb57f83c175dfd5b7b5: Status 404 returned error can't find the container with id 37ed2a4cf6c75c588697f5136d6689b1684ad3478ca04fb57f83c175dfd5b7b5 Jan 28 09:42:27 crc kubenswrapper[4735]: W0128 09:42:27.992951 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0724dce1172941a4be564534328ea9d2bb0bd6dad7a49ce65d5d740404241966 WatchSource:0}: Error finding container 0724dce1172941a4be564534328ea9d2bb0bd6dad7a49ce65d5d740404241966: Status 404 returned error can't find the container with id 0724dce1172941a4be564534328ea9d2bb0bd6dad7a49ce65d5d740404241966 Jan 28 09:42:28 crc kubenswrapper[4735]: E0128 09:42:28.052818 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: 
connection refused" interval="800ms" Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.298903 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.300539 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.300594 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.300612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.300648 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 09:42:28 crc kubenswrapper[4735]: E0128 09:42:28.301101 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.447069 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:28 crc kubenswrapper[4735]: W0128 09:42:28.448608 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:28 crc kubenswrapper[4735]: E0128 09:42:28.448688 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.449651 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:38:25.519517913 +0000 UTC Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.508863 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"37ed2a4cf6c75c588697f5136d6689b1684ad3478ca04fb57f83c175dfd5b7b5"} Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.509930 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0e037e81542b499b95629193ec77dabeb220c67853b701b05286e25c37898f3e"} Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.511071 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e9e8ed71bd56fa563d47bc64df18e6c9c010ac86471d37ba0d2a794113abc55"} Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.511883 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e6f21124a2716d4d7bc3663f99d2d4d56c74c4c5ac1d9308226841de74320f12"} Jan 28 09:42:28 crc kubenswrapper[4735]: I0128 09:42:28.512931 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0724dce1172941a4be564534328ea9d2bb0bd6dad7a49ce65d5d740404241966"} Jan 28 09:42:28 crc kubenswrapper[4735]: W0128 09:42:28.672853 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:28 crc kubenswrapper[4735]: E0128 09:42:28.672952 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:28 crc kubenswrapper[4735]: W0128 09:42:28.690186 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:28 crc kubenswrapper[4735]: E0128 09:42:28.690328 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:28 crc kubenswrapper[4735]: E0128 09:42:28.854117 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="1.6s" Jan 28 09:42:29 crc kubenswrapper[4735]: W0128 
09:42:29.015694 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:29 crc kubenswrapper[4735]: E0128 09:42:29.016261 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.101590 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.103335 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.103395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.103413 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.103458 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 09:42:29 crc kubenswrapper[4735]: E0128 09:42:29.104012 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.401700 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 09:42:29 crc kubenswrapper[4735]: E0128 09:42:29.403287 4735 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.447515 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.449985 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:07:28.46694738 +0000 UTC Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.520892 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d"} Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.520987 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0"} Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.521019 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6"} Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 
09:42:29.521044 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118"} Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.521185 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.522286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.522326 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.522339 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.524839 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b" exitCode=0 Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.524963 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.524974 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b"} Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.526279 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.526343 4735 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.526360 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.527737 4735 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0" exitCode=0 Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.527857 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0"} Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.528128 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.528182 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.529645 4735 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f13f4c1c3be6b46bf13d17cb41160b21fb1c044fb20ee06fa1317583c71fdb05" exitCode=0 Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.529723 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.529732 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f13f4c1c3be6b46bf13d17cb41160b21fb1c044fb20ee06fa1317583c71fdb05"} Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.529775 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.529826 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.529916 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.529921 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.529935 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.530036 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.531215 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.531255 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.531271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.532095 4735 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3" exitCode=0 Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.532151 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3"} Jan 28 09:42:29 crc 
kubenswrapper[4735]: I0128 09:42:29.532231 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.534211 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.534254 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.534274 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:29 crc kubenswrapper[4735]: I0128 09:42:29.819721 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:30 crc kubenswrapper[4735]: W0128 09:42:30.355449 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:30 crc kubenswrapper[4735]: E0128 09:42:30.355662 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.446804 4735 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.450976 4735 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:14:55.84480425 +0000 UTC Jan 28 09:42:30 crc kubenswrapper[4735]: E0128 09:42:30.455781 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="3.2s" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.541726 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"654b295d2e5a1a1b46320511f00de492771e80391afdfffab5f0b50fadbc15fc"} Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.541827 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.542910 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.542948 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.542970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.544341 4735 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ac732e1f927aebb1514996364c8ed6eaf6986d1eeab2c2b069744681e4e28307" exitCode=0 Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.544470 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.544465 4735 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ac732e1f927aebb1514996364c8ed6eaf6986d1eeab2c2b069744681e4e28307"} Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.547811 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.547838 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.547848 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.550232 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267"} Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.550272 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65"} Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.550287 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f"} Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.550316 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.551462 4735 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.551489 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.551501 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.555740 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0"} Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.555799 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f"} Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.555813 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5"} Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.555825 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173"} Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.555848 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.556965 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 
09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.556999 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.557015 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:30 crc kubenswrapper[4735]: E0128 09:42:30.697861 4735 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188edbc61766e82b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 09:42:27.445811243 +0000 UTC m=+0.595475706,LastTimestamp:2026-01-28 09:42:27.445811243 +0000 UTC m=+0.595475706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.704592 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.707074 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.707143 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.707158 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:30 crc kubenswrapper[4735]: I0128 09:42:30.707200 4735 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Jan 28 09:42:30 crc kubenswrapper[4735]: E0128 09:42:30.708021 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.236:6443: connect: connection refused" node="crc" Jan 28 09:42:30 crc kubenswrapper[4735]: W0128 09:42:30.755197 4735 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.236:6443: connect: connection refused Jan 28 09:42:30 crc kubenswrapper[4735]: E0128 09:42:30.755401 4735 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.236:6443: connect: connection refused" logger="UnhandledError" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.451810 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:29:38.111106934 +0000 UTC Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.563688 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7"} Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.564442 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.566671 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.566959 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.566901 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"877d7e81ccc5eac9e8c72cc551b8b8cedf1c59ea178d681fb825c60e2ac1446a"} Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.566861 4735 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="877d7e81ccc5eac9e8c72cc551b8b8cedf1c59ea178d681fb825c60e2ac1446a" exitCode=0 Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.567074 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.567432 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.567457 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.567522 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.567460 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.567197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569416 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:31 crc 
kubenswrapper[4735]: I0128 09:42:31.569435 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569439 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569478 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569498 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569600 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569906 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569933 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:31 crc kubenswrapper[4735]: I0128 09:42:31.569948 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.310083 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.453058 4735 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:38:10.854500868 +0000 UTC Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.574445 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a3f623106c85d44ba8c68c57939a87bc003be64f1aa9a0cca3a90a89b4ad2204"} Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.574507 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.574512 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"db674fa6425807755e3c03b20c1c9b6c473908e0c0f5346bf8b029a38666f585"} Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.574544 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"02593b4310b50b6115d4fc9fcb9e75f22262b00aea6d5f8457f88cf22dc3a4b1"} Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.574566 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"774d11be6ced02080d6f754fb9b3d9a6346873ecf9a2a0ad28849fc8161429ea"} Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.574640 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.575382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.575420 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.575431 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.578888 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.578981 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.579766 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.579803 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.579817 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.820490 4735 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 09:42:32 crc kubenswrapper[4735]: I0128 09:42:32.820589 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.389377 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.454220 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:58:48.407974634 +0000 UTC Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.500712 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.586885 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4274cd42e013260d9bf55768abb71dd01907b891de1cf23fa389f5c5148fc0d4"} Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.587050 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.587076 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.587174 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.588812 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.588974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.589051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.588826 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:33 
crc kubenswrapper[4735]: I0128 09:42:33.589171 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.589184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.588905 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.589299 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.589315 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.908873 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.910626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.910866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.911034 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:33 crc kubenswrapper[4735]: I0128 09:42:33.911215 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.228602 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.454349 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:22:36.859908351 +0000 UTC Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.589708 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.589711 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.590737 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.590852 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.590866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.590966 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.590989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.591000 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.776195 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.776372 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.778090 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.778151 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.778161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:34 crc kubenswrapper[4735]: I0128 09:42:34.783870 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:35 crc kubenswrapper[4735]: I0128 09:42:35.214555 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 28 09:42:35 crc kubenswrapper[4735]: I0128 09:42:35.455471 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:50:26.571678008 +0000 UTC Jan 28 09:42:35 crc kubenswrapper[4735]: I0128 09:42:35.592586 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:35 crc kubenswrapper[4735]: I0128 09:42:35.592723 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:35 crc kubenswrapper[4735]: I0128 09:42:35.593772 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:35 crc kubenswrapper[4735]: I0128 09:42:35.593809 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:35 crc kubenswrapper[4735]: I0128 09:42:35.593820 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:35 crc kubenswrapper[4735]: I0128 09:42:35.597025 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:35 
crc kubenswrapper[4735]: I0128 09:42:35.597094 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:35 crc kubenswrapper[4735]: I0128 09:42:35.597118 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.614521 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:52:19.580656039 +0000 UTC Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.644146 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.644380 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.645870 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.645933 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.645951 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.968803 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.969096 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.970490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 
09:42:36.970531 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:36 crc kubenswrapper[4735]: I0128 09:42:36.970541 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:37 crc kubenswrapper[4735]: E0128 09:42:37.604221 4735 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 09:42:37 crc kubenswrapper[4735]: I0128 09:42:37.614949 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:16:58.349167147 +0000 UTC Jan 28 09:42:38 crc kubenswrapper[4735]: I0128 09:42:38.615463 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:54:06.600169761 +0000 UTC Jan 28 09:42:39 crc kubenswrapper[4735]: I0128 09:42:39.615959 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:20:22.832520242 +0000 UTC Jan 28 09:42:40 crc kubenswrapper[4735]: I0128 09:42:40.616342 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:24:36.279721065 +0000 UTC Jan 28 09:42:41 crc kubenswrapper[4735]: I0128 09:42:41.200410 4735 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 09:42:41 crc kubenswrapper[4735]: I0128 09:42:41.200676 
4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 09:42:41 crc kubenswrapper[4735]: I0128 09:42:41.207435 4735 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 09:42:41 crc kubenswrapper[4735]: I0128 09:42:41.207533 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 09:42:41 crc kubenswrapper[4735]: I0128 09:42:41.616677 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:35:49.734660024 +0000 UTC Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.315994 4735 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]log ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]etcd ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 09:42:42 crc 
kubenswrapper[4735]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/priority-and-fairness-filter ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-apiextensions-informers ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-apiextensions-controllers ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/crd-informer-synced ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-system-namespaces-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 28 09:42:42 crc kubenswrapper[4735]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/bootstrap-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/start-kube-aggregator-informers ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/apiservice-registration-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/apiservice-discovery-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]autoregister-completion ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/apiservice-openapi-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 28 09:42:42 crc kubenswrapper[4735]: livez check failed Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.316059 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.585028 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.585238 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.586707 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.586774 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.586788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.617463 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:31:02.511567912 +0000 UTC Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.821668 4735 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 09:42:42 crc kubenswrapper[4735]: I0128 09:42:42.821842 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 09:42:43 crc kubenswrapper[4735]: I0128 09:42:43.617875 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:04:48.133970624 +0000 UTC Jan 28 09:42:44 crc kubenswrapper[4735]: I0128 09:42:44.618617 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:11:32.783368466 +0000 UTC Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.238051 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-etcd/etcd-crc" Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.238314 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.248395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.248451 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.248462 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.256318 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.619367 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:04:27.78155521 +0000 UTC Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.644123 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.645884 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.645947 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:45 crc kubenswrapper[4735]: I0128 09:42:45.645966 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.199267 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.206076 4735 trace.go:236] Trace[1200746472]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 09:42:35.228) (total time: 10977ms): Jan 28 09:42:46 crc kubenswrapper[4735]: Trace[1200746472]: ---"Objects listed" error: 10977ms (09:42:46.205) Jan 28 09:42:46 crc kubenswrapper[4735]: Trace[1200746472]: [10.977173114s] [10.977173114s] END Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.206118 4735 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.239088 4735 trace.go:236] Trace[28771026]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 09:42:31.208) (total time: 15030ms): Jan 28 09:42:46 crc kubenswrapper[4735]: Trace[28771026]: ---"Objects listed" error: 15030ms (09:42:46.238) Jan 28 09:42:46 crc kubenswrapper[4735]: Trace[28771026]: [15.030520158s] [15.030520158s] END Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.239148 4735 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.239130 4735 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.242588 4735 trace.go:236] Trace[727902634]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 09:42:35.679) (total time: 10562ms): Jan 28 09:42:46 crc kubenswrapper[4735]: Trace[727902634]: ---"Objects listed" error: 10562ms (09:42:46.242) Jan 28 09:42:46 crc kubenswrapper[4735]: Trace[727902634]: [10.562732915s] [10.562732915s] END Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.242620 4735 reflector.go:368] 
Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.243126 4735 trace.go:236] Trace[529134643]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 09:42:31.287) (total time: 14955ms): Jan 28 09:42:46 crc kubenswrapper[4735]: Trace[529134643]: ---"Objects listed" error: 14955ms (09:42:46.242) Jan 28 09:42:46 crc kubenswrapper[4735]: Trace[529134643]: [14.955840769s] [14.955840769s] END Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.243163 4735 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.254232 4735 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.261792 4735 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.301235 4735 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52340->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.301326 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52340->192.168.126.11:17697: read: connection reset by peer" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.620106 4735 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:55:40.889532399 +0000 UTC Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.620559 4735 apiserver.go:52] "Watching apiserver" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.625056 4735 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.625258 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.625583 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.625655 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.625652 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.626089 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.626201 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.626202 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.626633 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.626682 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.626729 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.628281 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.628539 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.628856 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.628912 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.629045 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.629072 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.629107 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.629170 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.629169 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.649580 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.650113 4735 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.652137 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7" exitCode=255 Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.652196 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7"} Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.654705 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.654739 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.654788 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.654809 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.654853 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.654917 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.654935 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.654997 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655014 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655051 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655074 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655089 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655106 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655146 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655165 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655298 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655320 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655340 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655359 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655376 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655392 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655408 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655426 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655446 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655463 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 
28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655482 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655500 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655519 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655537 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655556 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655573 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655590 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655607 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655648 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655669 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655686 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 09:42:46 crc 
kubenswrapper[4735]: I0128 09:42:46.655736 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655824 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655847 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655870 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655859 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655890 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655912 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655932 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655949 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655967 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.655983 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656023 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656051 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656074 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656070 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656100 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656119 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656137 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656154 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656172 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656189 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656206 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656222 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656829 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656849 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.657356 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.657447 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.657499 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.657699 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.657823 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.657931 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.656240 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658019 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658252 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658285 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658310 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658017 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658255 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.661718 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658359 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658547 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.658664 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:42:47.158637336 +0000 UTC m=+20.308301929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.661889 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.661937 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.661971 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662006 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662036 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662075 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662108 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662140 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662181 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662220 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662213 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662303 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662336 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662368 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662398 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662503 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662584 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662622 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662651 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662707 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662757 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662791 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662832 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662890 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663170 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663199 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663226 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663892 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663936 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663962 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663994 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664364 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664467 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664499 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664526 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664556 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664586 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664622 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664811 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664844 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664878 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664935 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664970 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664996 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665032 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665064 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665093 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665125 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665149 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665179 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665210 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665239 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665270 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665298 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665325 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665366 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665394 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665425 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665456 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665485 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665519 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 09:42:46 
crc kubenswrapper[4735]: I0128 09:42:46.665547 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665573 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665603 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665641 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665667 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665697 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665728 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665773 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665802 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665830 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665863 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 09:42:46 crc 
kubenswrapper[4735]: I0128 09:42:46.665896 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665924 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665954 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665985 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666013 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666044 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666071 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666116 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666144 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.662736 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663114 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666174 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658926 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658978 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.659008 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666206 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666241 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666274 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666300 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 09:42:46 crc 
kubenswrapper[4735]: I0128 09:42:46.666327 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666358 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666386 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666416 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666446 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666898 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667076 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667140 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667230 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667302 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667367 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667401 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667430 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667499 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667531 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667563 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667687 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667728 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667771 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667804 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667837 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667870 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") 
" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667902 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667934 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667966 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668000 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668040 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668077 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668123 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668164 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668209 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668245 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668274 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" 
(UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668312 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668362 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668400 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668432 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668461 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668539 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668571 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668603 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668637 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668670 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668701 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 
09:42:46.668733 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668778 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668813 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668845 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668881 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.668934 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669011 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669051 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669087 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669136 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669171 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669220 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669276 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669326 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669357 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669386 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669418 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669461 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669501 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669531 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.669867 4735 scope.go:117] "RemoveContainer" containerID="c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.670030 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671020 4735 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671048 4735 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671067 4735 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671082 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671096 4735 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671110 4735 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671126 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671142 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671157 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671173 4735 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671190 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671203 4735 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671215 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671230 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671240 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671250 4735 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671261 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671271 4735 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671282 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671292 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671302 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671954 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.659042 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.673082 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.659369 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.659789 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.659815 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.659911 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.660008 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.660148 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.660391 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.660400 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.660447 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.660576 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.660916 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.661097 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.661106 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.661203 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.661592 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.661516 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663135 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663172 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663406 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.658648 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663494 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663512 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663641 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663696 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663712 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663882 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.663963 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664066 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664099 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664111 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664141 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.664528 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665035 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665246 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665426 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665481 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665725 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665827 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.665847 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666209 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666536 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666583 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666789 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666890 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666910 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666915 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.666938 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667133 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.667220 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671518 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671521 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.671735 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.673016 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.673928 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.673930 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.674383 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.674445 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.674708 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.674906 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.676435 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.676496 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.676849 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.676934 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.678228 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.678835 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.679246 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.679286 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.679314 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.679835 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.680388 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.680437 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.680514 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.680516 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.680591 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.681179 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.681543 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.682008 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.682007 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.682005 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.683309 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.683923 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.684013 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.684149 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.684315 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.684564 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.684588 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.685207 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.685873 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:47.185844251 +0000 UTC m=+20.335508814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.686339 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.686511 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.686854 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.687013 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.687183 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.687205 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.687222 4735 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.687655 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.687801 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.687832 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.687851 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.688322 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.688326 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.688580 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.688611 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.688649 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.688872 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.689236 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.689590 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.690005 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.690136 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.690196 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.690333 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.690357 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.690535 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.690653 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.691046 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.691092 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.691215 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.691234 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.691334 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.691605 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.691808 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.691932 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.692256 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:47.192232574 +0000 UTC m=+20.341897167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.695186 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.695555 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.696363 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.696563 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.696608 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.696756 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.696964 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.697159 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.697511 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.697841 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.701901 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.702278 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.702420 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.702435 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.702557 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:47.202528692 +0000 UTC m=+20.352193065 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.702300 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.703006 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.704295 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.707044 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.707376 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.707973 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.709549 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.709591 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.709630 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:46 crc kubenswrapper[4735]: E0128 09:42:46.709713 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:47.209677575 +0000 UTC m=+20.359342018 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.710718 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.713591 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.719136 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.720327 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.720400 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.720534 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.720570 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.720613 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.720812 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.722526 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.722914 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.723128 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.723481 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.724064 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.724400 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.724477 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.724882 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.725032 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.725125 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.725257 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.725856 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.725872 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.725912 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.726160 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.726337 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.726371 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.726685 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.726647 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.727278 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.727290 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.728077 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.728160 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.728349 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.728651 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.729436 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.729451 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.729596 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.729633 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.730050 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.730094 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.730232 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.730855 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.731371 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.743427 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.748272 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.749687 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.753777 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.756575 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.769549 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 
dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.777989 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778051 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778248 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778267 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778287 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 
09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778304 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778318 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778337 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778352 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778386 4735 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778401 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778419 4735 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778432 4735 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778449 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778463 4735 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778482 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778496 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778510 4735 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778529 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778543 4735 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778558 4735 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778573 4735 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778606 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778619 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778633 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778646 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778665 4735 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778677 4735 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778690 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778703 4735 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778737 4735 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778773 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778805 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778825 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 
09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778841 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778877 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778891 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778909 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778922 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778934 4735 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778948 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778965 4735 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.778995 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779008 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779021 4735 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779037 4735 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779235 4735 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779247 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779264 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779276 4735 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779289 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779302 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779324 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779337 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779356 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779348 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779370 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779389 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779405 4735 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779417 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779424 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779472 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779487 4735 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779501 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779514 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779533 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779547 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779560 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779590 4735 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779608 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779623 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779636 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779650 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779668 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779688 4735 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779700 4735 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779723 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 
09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779736 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779767 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779782 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779798 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779810 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779823 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779835 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779851 4735 reconciler_common.go:293] "Volume detached for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779869 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779888 4735 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779903 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779921 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779935 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.779948 4735 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780005 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780020 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780043 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780065 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780091 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780110 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780126 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780140 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" 
DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780159 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780174 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780186 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780204 4735 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780217 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780231 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780245 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc 
kubenswrapper[4735]: I0128 09:42:46.780266 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780279 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780291 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780304 4735 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780320 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780332 4735 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780344 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780356 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780375 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780392 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780406 4735 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780425 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780439 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780452 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780465 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc 
kubenswrapper[4735]: I0128 09:42:46.780485 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780499 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780514 4735 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780527 4735 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780547 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780560 4735 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780573 4735 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780589 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780602 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780615 4735 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780628 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780644 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780657 4735 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780669 4735 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780682 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780698 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780710 4735 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780725 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780739 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780775 4735 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780787 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780800 4735 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath 
\"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780817 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780829 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780841 4735 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780853 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780869 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780887 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780906 4735 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780921 4735 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780937 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780950 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780963 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780976 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.780993 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781006 4735 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781020 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781067 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781080 4735 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781093 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781106 4735 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781122 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781135 4735 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781148 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781160 4735 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781177 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781190 4735 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781204 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781221 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781234 4735 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781246 4735 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 
09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781259 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781275 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781287 4735 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.781301 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.788629 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.801174 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.810896 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.939218 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.946838 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 09:42:46 crc kubenswrapper[4735]: I0128 09:42:46.955056 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 09:42:46 crc kubenswrapper[4735]: W0128 09:42:46.957329 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-9ebef29b316e2e238d50767ca4cba5f86c877d073e7f21df1bf9477634e61c87 WatchSource:0}: Error finding container 9ebef29b316e2e238d50767ca4cba5f86c877d073e7f21df1bf9477634e61c87: Status 404 returned error can't find the container with id 9ebef29b316e2e238d50767ca4cba5f86c877d073e7f21df1bf9477634e61c87 Jan 28 09:42:46 crc kubenswrapper[4735]: W0128 09:42:46.971023 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a1b79f5a69d6d8a2c992c5b3dda63068ddbdc8b6d363f868850cc14483350251 WatchSource:0}: Error finding container a1b79f5a69d6d8a2c992c5b3dda63068ddbdc8b6d363f868850cc14483350251: Status 404 returned error can't find the container with id a1b79f5a69d6d8a2c992c5b3dda63068ddbdc8b6d363f868850cc14483350251 Jan 28 09:42:46 crc kubenswrapper[4735]: W0128 09:42:46.971403 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-69727a400faa03ac92cb3c21ffe843b90de63169402d0d30098bd97523cfbae5 WatchSource:0}: Error finding container 69727a400faa03ac92cb3c21ffe843b90de63169402d0d30098bd97523cfbae5: Status 404 returned error can't find the container with id 69727a400faa03ac92cb3c21ffe843b90de63169402d0d30098bd97523cfbae5 Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.183797 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.184032 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:42:48.183999563 +0000 UTC m=+21.333663956 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.284637 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.284709 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.284773 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.284796 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.284894 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:48.284875271 +0000 UTC m=+21.434539644 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.284808 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.284905 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.284980 4735 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.285002 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.284933 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.285098 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:48.285054845 +0000 UTC m=+21.434719298 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.285041 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.285300 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:48.285281671 +0000 UTC m=+21.434946044 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.292044 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.292103 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:47 crc kubenswrapper[4735]: E0128 09:42:47.292238 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:48.292203218 +0000 UTC m=+21.441867771 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.316201 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.333530 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.347410 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.359915 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.373487 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.385692 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.399212 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.413420 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.507391 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.508565 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.511146 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.512531 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.514667 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.515449 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.516324 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.517669 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.518670 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.520035 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.520715 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.522002 4735 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.522322 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.523459 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.524454 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.525844 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 09:42:47 crc 
kubenswrapper[4735]: I0128 09:42:47.526483 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.527577 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.528101 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.528788 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.530270 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.530802 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.531871 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.532550 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 09:42:47 crc 
kubenswrapper[4735]: I0128 09:42:47.533626 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.534204 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.534893 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.535675 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.536517 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.537032 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.538230 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.539139 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.540219 4735 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.540347 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: 
I0128 09:42:47.542253 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.546678 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.547307 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.548424 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.548479 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.549143 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.549671 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.550362 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.551198 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.551847 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.552679 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.553585 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.554499 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.555110 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.557177 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.558800 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.559563 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.560512 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.561265 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.562340 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.563016 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.563698 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.565044 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.565659 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.572563 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.587470 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.600114 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.620801 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:20:01.733395864 +0000 UTC Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.657461 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7"} Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.657946 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"69727a400faa03ac92cb3c21ffe843b90de63169402d0d30098bd97523cfbae5"} Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.660982 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb"} Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.661044 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8"} Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.661059 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9ebef29b316e2e238d50767ca4cba5f86c877d073e7f21df1bf9477634e61c87"} Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.663061 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.665335 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9"} Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.665741 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.666101 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a1b79f5a69d6d8a2c992c5b3dda63068ddbdc8b6d363f868850cc14483350251"} Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.670210 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.673432 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.689171 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.703167 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.720499 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial 
tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.737238 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.750334 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.774062 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.792109 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.810325 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.831503 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.854568 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.873247 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.892667 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:47 crc kubenswrapper[4735]: I0128 09:42:47.908721 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:48 crc kubenswrapper[4735]: I0128 09:42:48.193177 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:48 crc 
kubenswrapper[4735]: E0128 09:42:48.193355 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:42:50.193322308 +0000 UTC m=+23.342986681 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:42:48 crc kubenswrapper[4735]: I0128 09:42:48.294792 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:48 crc kubenswrapper[4735]: I0128 09:42:48.294851 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:48 crc kubenswrapper[4735]: I0128 09:42:48.294878 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:48 crc kubenswrapper[4735]: I0128 09:42:48.294903 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295028 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295054 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295070 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295087 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295093 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295120 4735 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:50.29510582 +0000 UTC m=+23.444770183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295028 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295264 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:50.295232024 +0000 UTC m=+23.444896547 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295274 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295287 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:50.295279355 +0000 UTC m=+23.444943728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295292 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.295342 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-28 09:42:50.295328476 +0000 UTC m=+23.444993059 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:48 crc kubenswrapper[4735]: I0128 09:42:48.502017 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:48 crc kubenswrapper[4735]: I0128 09:42:48.502169 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:48 crc kubenswrapper[4735]: I0128 09:42:48.502144 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.502273 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.502446 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:42:48 crc kubenswrapper[4735]: E0128 09:42:48.502635 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:42:48 crc kubenswrapper[4735]: I0128 09:42:48.621471 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:26:34.865842804 +0000 UTC Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.622223 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:16:11.057063158 +0000 UTC Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.825624 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.830527 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.835647 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.841297 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.860205 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.878648 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.895379 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.909662 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.927093 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.942044 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.958321 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.976890 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:49 crc kubenswrapper[4735]: I0128 09:42:49.990293 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.002236 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:49Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.016434 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.031299 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.046459 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.059145 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.212336 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.212490 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:42:54.21243932 +0000 UTC m=+27.362103693 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.312979 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.313043 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.313067 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.313087 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313120 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313142 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313154 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313163 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313182 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313194 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 
09:42:50.313200 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:54.313186464 +0000 UTC m=+27.462850837 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313228 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:54.313217665 +0000 UTC m=+27.462882038 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313269 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313287 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313309 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:42:54.313298647 +0000 UTC m=+27.462963020 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.313468 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-28 09:42:54.313429721 +0000 UTC m=+27.463094284 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.502735 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.502799 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.503008 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.503128 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.502799 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:50 crc kubenswrapper[4735]: E0128 09:42:50.503254 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.622955 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:55:31.801077719 +0000 UTC Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.677471 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f"} Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.698235 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.722293 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.739173 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.755526 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.770329 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.788553 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.802920 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:50 crc kubenswrapper[4735]: I0128 09:42:50.817955 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:50Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:51 crc kubenswrapper[4735]: I0128 09:42:51.623451 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:06:17.240202506 +0000 UTC Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.201115 4735 csr.go:261] certificate signing request csr-2wtdf is approved, waiting to be issued Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 
09:42:52.216299 4735 csr.go:257] certificate signing request csr-2wtdf is issued Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.502031 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.502151 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.502224 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:52 crc kubenswrapper[4735]: E0128 09:42:52.502252 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:42:52 crc kubenswrapper[4735]: E0128 09:42:52.502375 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:42:52 crc kubenswrapper[4735]: E0128 09:42:52.502488 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.624511 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:40:42.853964456 +0000 UTC Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.662962 4735 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.664402 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.664443 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.664453 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.664515 4735 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.674657 4735 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.674791 4735 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.676007 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.676051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.676063 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.676082 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.676094 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:52Z","lastTransitionTime":"2026-01-28T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:52 crc kubenswrapper[4735]: E0128 09:42:52.698349 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:52Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.701790 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.701829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.701842 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.701860 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.701872 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:52Z","lastTransitionTime":"2026-01-28T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:52 crc kubenswrapper[4735]: E0128 09:42:52.718293 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:52Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.721882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.721946 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.722119 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.722152 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.722432 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:52Z","lastTransitionTime":"2026-01-28T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:52 crc kubenswrapper[4735]: E0128 09:42:52.744061 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:52Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.752080 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.752140 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.752150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.752170 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.752183 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:52Z","lastTransitionTime":"2026-01-28T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:52 crc kubenswrapper[4735]: E0128 09:42:52.765327 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:52Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.769356 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.769434 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.769451 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.769477 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.769494 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:52Z","lastTransitionTime":"2026-01-28T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:52 crc kubenswrapper[4735]: E0128 09:42:52.789209 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:52Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:52 crc kubenswrapper[4735]: E0128 09:42:52.789347 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.791474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.791520 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.791536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.791553 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.791564 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:52Z","lastTransitionTime":"2026-01-28T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.893472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.893527 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.893540 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.893561 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.893576 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:52Z","lastTransitionTime":"2026-01-28T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.996015 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.996053 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.996064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.996078 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:52 crc kubenswrapper[4735]: I0128 09:42:52.996087 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:52Z","lastTransitionTime":"2026-01-28T09:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.003063 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-nk6gd"] Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.003649 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.005957 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.006168 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.006323 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.007090 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.007704 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-jxbwx"] Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.008125 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.008693 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.010610 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tjk5s"] Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.010869 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-l7kzt"] Jan 28 09:42:53 crc kubenswrapper[4735]: W0128 09:42:53.011029 4735 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 28 09:42:53 crc kubenswrapper[4735]: E0128 09:42:53.011077 4735 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.011160 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: W0128 09:42:53.011200 4735 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 28 09:42:53 crc kubenswrapper[4735]: E0128 09:42:53.011217 4735 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.011324 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tjk5s" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.013955 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.014321 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.014505 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.014802 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.015013 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.016112 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.017583 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.019027 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.028991 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.036811 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrn6\" (UniqueName: \"kubernetes.io/projected/2f975eb5-cf83-4145-8d73-879cc67863cf-kube-api-access-lzrn6\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.036860 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-system-cni-dir\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.036887 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-multus-socket-dir-parent\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.036943 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-run-netns\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037027 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f975eb5-cf83-4145-8d73-879cc67863cf-mcd-auth-proxy-config\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037061 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-var-lib-cni-bin\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037083 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c50beb21-41ab-474f-90d2-ba27152379cf-multus-daemon-config\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037105 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-run-multus-certs\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037126 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2f975eb5-cf83-4145-8d73-879cc67863cf-rootfs\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037147 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61bae89b-69e8-478b-83d8-b493d0e4d1bf-cni-binary-copy\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037170 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-var-lib-kubelet\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037215 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-cnibin\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037232 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46qx\" (UniqueName: \"kubernetes.io/projected/61bae89b-69e8-478b-83d8-b493d0e4d1bf-kube-api-access-f46qx\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037249 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-multus-conf-dir\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037281 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-system-cni-dir\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037298 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-os-release\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037323 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/61bae89b-69e8-478b-83d8-b493d0e4d1bf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " 
pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037344 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-os-release\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037361 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c50beb21-41ab-474f-90d2-ba27152379cf-cni-binary-copy\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037378 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-run-k8s-cni-cncf-io\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-var-lib-cni-multus\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037411 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f975eb5-cf83-4145-8d73-879cc67863cf-proxy-tls\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037425 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-cnibin\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037442 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-multus-cni-dir\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037456 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-etc-kubernetes\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037470 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qsbb\" (UniqueName: \"kubernetes.io/projected/c50beb21-41ab-474f-90d2-ba27152379cf-kube-api-access-7qsbb\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037487 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxw8d\" (UniqueName: \"kubernetes.io/projected/d9476ba6-f6b7-4fef-8276-dea1f45b23f3-kube-api-access-rxw8d\") pod \"node-resolver-tjk5s\" (UID: \"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\") " 
pod="openshift-dns/node-resolver-tjk5s" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037505 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037520 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-hostroot\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.037535 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d9476ba6-f6b7-4fef-8276-dea1f45b23f3-hosts-file\") pod \"node-resolver-tjk5s\" (UID: \"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\") " pod="openshift-dns/node-resolver-tjk5s" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.045668 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.060261 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.072912 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.086029 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.099028 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.099067 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.099081 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.099098 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.099111 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:53Z","lastTransitionTime":"2026-01-28T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.106036 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.122267 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.133837 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138270 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2f975eb5-cf83-4145-8d73-879cc67863cf-rootfs\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138306 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61bae89b-69e8-478b-83d8-b493d0e4d1bf-cni-binary-copy\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138328 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-var-lib-kubelet\") pod 
\"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138360 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-cnibin\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138379 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46qx\" (UniqueName: \"kubernetes.io/projected/61bae89b-69e8-478b-83d8-b493d0e4d1bf-kube-api-access-f46qx\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138397 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-multus-conf-dir\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138454 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-system-cni-dir\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138474 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-os-release\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: 
\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138563 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/61bae89b-69e8-478b-83d8-b493d0e4d1bf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138579 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-os-release\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138531 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-var-lib-kubelet\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138583 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-os-release\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138530 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-multus-conf-dir\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " 
pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138505 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-cnibin\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138667 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c50beb21-41ab-474f-90d2-ba27152379cf-cni-binary-copy\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138660 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-system-cni-dir\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138708 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-run-k8s-cni-cncf-io\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138688 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-run-k8s-cni-cncf-io\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 
09:42:53.138734 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-var-lib-cni-multus\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138773 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f975eb5-cf83-4145-8d73-879cc67863cf-proxy-tls\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138627 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-os-release\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138786 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-var-lib-cni-multus\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138792 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-cnibin\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138822 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-multus-cni-dir\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138828 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-cnibin\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138839 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-etc-kubernetes\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138856 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qsbb\" (UniqueName: \"kubernetes.io/projected/c50beb21-41ab-474f-90d2-ba27152379cf-kube-api-access-7qsbb\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138861 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-etc-kubernetes\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138875 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxw8d\" (UniqueName: \"kubernetes.io/projected/d9476ba6-f6b7-4fef-8276-dea1f45b23f3-kube-api-access-rxw8d\") pod \"node-resolver-tjk5s\" (UID: \"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\") " 
pod="openshift-dns/node-resolver-tjk5s" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138893 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138898 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-multus-cni-dir\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138930 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-hostroot\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138910 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-hostroot\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138957 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d9476ba6-f6b7-4fef-8276-dea1f45b23f3-hosts-file\") pod \"node-resolver-tjk5s\" (UID: \"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\") " pod="openshift-dns/node-resolver-tjk5s" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.138983 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lzrn6\" (UniqueName: \"kubernetes.io/projected/2f975eb5-cf83-4145-8d73-879cc67863cf-kube-api-access-lzrn6\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139002 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-system-cni-dir\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139012 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d9476ba6-f6b7-4fef-8276-dea1f45b23f3-hosts-file\") pod \"node-resolver-tjk5s\" (UID: \"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\") " pod="openshift-dns/node-resolver-tjk5s" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139018 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-multus-socket-dir-parent\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139035 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-run-netns\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139053 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2f975eb5-cf83-4145-8d73-879cc67863cf-mcd-auth-proxy-config\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139067 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-var-lib-cni-bin\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139082 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c50beb21-41ab-474f-90d2-ba27152379cf-multus-daemon-config\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139093 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61bae89b-69e8-478b-83d8-b493d0e4d1bf-cni-binary-copy\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139115 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-run-multus-certs\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139100 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-run-netns\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139098 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-run-multus-certs\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139135 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-host-var-lib-cni-bin\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139189 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-system-cni-dir\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139225 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c50beb21-41ab-474f-90d2-ba27152379cf-multus-socket-dir-parent\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139301 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/61bae89b-69e8-478b-83d8-b493d0e4d1bf-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: 
\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139634 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61bae89b-69e8-478b-83d8-b493d0e4d1bf-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139658 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f975eb5-cf83-4145-8d73-879cc67863cf-mcd-auth-proxy-config\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139716 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2f975eb5-cf83-4145-8d73-879cc67863cf-rootfs\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.139832 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c50beb21-41ab-474f-90d2-ba27152379cf-cni-binary-copy\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.145975 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.146478 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f975eb5-cf83-4145-8d73-879cc67863cf-proxy-tls\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.156310 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46qx\" (UniqueName: \"kubernetes.io/projected/61bae89b-69e8-478b-83d8-b493d0e4d1bf-kube-api-access-f46qx\") pod \"multus-additional-cni-plugins-nk6gd\" (UID: \"61bae89b-69e8-478b-83d8-b493d0e4d1bf\") " pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.157673 4735 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7qsbb\" (UniqueName: \"kubernetes.io/projected/c50beb21-41ab-474f-90d2-ba27152379cf-kube-api-access-7qsbb\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.160469 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrn6\" (UniqueName: \"kubernetes.io/projected/2f975eb5-cf83-4145-8d73-879cc67863cf-kube-api-access-lzrn6\") pod \"machine-config-daemon-l7kzt\" (UID: \"2f975eb5-cf83-4145-8d73-879cc67863cf\") " pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.161791 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxw8d\" (UniqueName: \"kubernetes.io/projected/d9476ba6-f6b7-4fef-8276-dea1f45b23f3-kube-api-access-rxw8d\") pod \"node-resolver-tjk5s\" (UID: \"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\") " pod="openshift-dns/node-resolver-tjk5s" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.169787 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.181910 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.194189 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.201263 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.201300 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.201309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.201324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.201334 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:53Z","lastTransitionTime":"2026-01-28T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.206962 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.217394 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 09:37:52 +0000 UTC, rotation deadline is 2026-12-20 11:39:00.231909909 +0000 UTC Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.217479 4735 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7825h56m7.014434495s for next certificate rotation Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.224880 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.241061 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.253944 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.267364 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.278710 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.292019 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.303398 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.303451 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.303463 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.303480 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.303492 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:53Z","lastTransitionTime":"2026-01-28T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.304042 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.315035 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.317866 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.327912 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.339325 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tjk5s" Jan 28 09:42:53 crc kubenswrapper[4735]: W0128 09:42:53.344358 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f975eb5_cf83_4145_8d73_879cc67863cf.slice/crio-8039ce6a5c0a9458215e94220c518b019f0ba2df8b8fe0880d54ed5d67fe50d4 WatchSource:0}: Error finding container 8039ce6a5c0a9458215e94220c518b019f0ba2df8b8fe0880d54ed5d67fe50d4: Status 404 returned error can't find the container with id 8039ce6a5c0a9458215e94220c518b019f0ba2df8b8fe0880d54ed5d67fe50d4 Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.384619 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6fwtd"] Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.385478 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.393771 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.394279 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.394560 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.394709 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.394793 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.395177 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.395978 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.406167 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.409858 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.409887 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.409896 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.409910 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.409918 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:53Z","lastTransitionTime":"2026-01-28T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.420357 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.432053 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.442418 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-var-lib-openvswitch\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444225 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-etc-openvswitch\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444261 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-bin\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444289 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a0aebf3-b579-4254-9f8c-7163984836f9-ovn-node-metrics-cert\") pod \"ovnkube-node-6fwtd\" (UID: 
\"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444324 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-netd\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444382 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-netns\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444417 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-ovn-kubernetes\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444437 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-config\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444499 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-node-log\") pod 
\"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444546 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-ovn\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444562 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-log-socket\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444578 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-env-overrides\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444616 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-systemd-units\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444632 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444650 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-slash\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444673 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-systemd\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444727 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-openvswitch\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444766 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-script-lib\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444799 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-kubelet\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.444824 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrct\" (UniqueName: \"kubernetes.io/projected/9a0aebf3-b579-4254-9f8c-7163984836f9-kube-api-access-lnrct\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.451625 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.472090 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.484980 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.497027 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.509665 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.514121 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.514164 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.514195 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.514211 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.514222 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:53Z","lastTransitionTime":"2026-01-28T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.524035 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.541324 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546212 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrct\" (UniqueName: \"kubernetes.io/projected/9a0aebf3-b579-4254-9f8c-7163984836f9-kube-api-access-lnrct\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546262 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-bin\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546479 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-var-lib-openvswitch\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546507 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-etc-openvswitch\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546541 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a0aebf3-b579-4254-9f8c-7163984836f9-ovn-node-metrics-cert\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546568 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-netd\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546590 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-netns\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546617 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-ovn-kubernetes\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546640 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-config\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546721 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-node-log\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546772 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-ovn\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546798 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-log-socket\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546822 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-systemd-units\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546847 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546873 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-env-overrides\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546919 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-slash\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546943 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-systemd\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546966 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-openvswitch\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.546992 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-script-lib\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547021 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-kubelet\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547096 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-kubelet\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547401 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-bin\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547451 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-slash\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547491 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-systemd\") pod \"ovnkube-node-6fwtd\" (UID: 
\"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547528 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-openvswitch\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547588 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-systemd-units\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547665 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-log-socket\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547712 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-netd\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547786 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-var-lib-openvswitch\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc 
kubenswrapper[4735]: I0128 09:42:53.547782 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.547851 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-etc-openvswitch\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.548044 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-node-log\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.548105 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-env-overrides\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.548138 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-netns\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.548052 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-ovn-kubernetes\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.548316 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-script-lib\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.548442 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-config\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.548567 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-ovn\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.553996 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a0aebf3-b579-4254-9f8c-7163984836f9-ovn-node-metrics-cert\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.558125 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.569509 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrct\" (UniqueName: \"kubernetes.io/projected/9a0aebf3-b579-4254-9f8c-7163984836f9-kube-api-access-lnrct\") pod \"ovnkube-node-6fwtd\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.575700 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.590242 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 
09:42:53.617001 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.617036 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.617046 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.617063 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.617073 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:53Z","lastTransitionTime":"2026-01-28T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.625233 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:15:25.061122629 +0000 UTC Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.689618 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.689682 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.689698 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"8039ce6a5c0a9458215e94220c518b019f0ba2df8b8fe0880d54ed5d67fe50d4"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.691659 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tjk5s" event={"ID":"d9476ba6-f6b7-4fef-8276-dea1f45b23f3","Type":"ContainerStarted","Data":"e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.691839 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tjk5s" event={"ID":"d9476ba6-f6b7-4fef-8276-dea1f45b23f3","Type":"ContainerStarted","Data":"d0bdaa7d682e45f2b76b91b48006803e3273e66d663debeb4eaf81834f0bea91"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.693239 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" event={"ID":"61bae89b-69e8-478b-83d8-b493d0e4d1bf","Type":"ContainerStarted","Data":"fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.693348 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" event={"ID":"61bae89b-69e8-478b-83d8-b493d0e4d1bf","Type":"ContainerStarted","Data":"fe3020f46a4f68aaad3b3ab10a2c27d172aa2089117909571165dd9c38b88dfa"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.704934 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.720200 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.720678 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.721094 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.721188 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.721259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.721456 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:53Z","lastTransitionTime":"2026-01-28T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.722934 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.738199 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.758864 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.769215 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.785918 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.798595 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.810031 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.823804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.823877 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.823894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.823944 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.823965 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:53Z","lastTransitionTime":"2026-01-28T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.833011 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.848926 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.860513 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.876358 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.891298 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.906969 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.922005 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-control
ler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.926694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.926727 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.926738 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.926821 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.926833 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:53Z","lastTransitionTime":"2026-01-28T09:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.935672 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.948028 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.962715 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.978354 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:53 crc kubenswrapper[4735]: I0128 09:42:53.992185 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:53Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.003614 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.014328 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.029430 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.029728 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.029769 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.029779 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc 
kubenswrapper[4735]: I0128 09:42:54.029793 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.029803 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.040858 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.062996 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.075840 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.126195 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.132362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.132396 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.132407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.132424 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.132434 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.140128 4735 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.140215 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c50beb21-41ab-474f-90d2-ba27152379cf-multus-daemon-config podName:c50beb21-41ab-474f-90d2-ba27152379cf nodeName:}" failed. No retries permitted until 2026-01-28 09:42:54.640193567 +0000 UTC m=+27.789857940 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/c50beb21-41ab-474f-90d2-ba27152379cf-multus-daemon-config") pod "multus-jxbwx" (UID: "c50beb21-41ab-474f-90d2-ba27152379cf") : failed to sync configmap cache: timed out waiting for the condition Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.234541 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.234584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.234597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.234611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc 
kubenswrapper[4735]: I0128 09:42:54.234623 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.237374 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.253468 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.253696 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:43:02.253667585 +0000 UTC m=+35.403331958 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.336877 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.336956 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.336971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.336998 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.337015 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.354422 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.354468 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.354489 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.354507 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354638 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354649 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354666 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354693 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354697 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:02.354683597 +0000 UTC m=+35.504347970 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354703 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354725 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:02.354715978 +0000 UTC m=+35.504380341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354919 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:02.354907694 +0000 UTC m=+35.504572057 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354728 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354971 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.354990 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.355074 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:02.355051117 +0000 UTC m=+35.504715480 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.439731 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.439811 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.439821 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.439842 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.439852 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.502422 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.502508 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.502562 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.502583 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.502734 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:42:54 crc kubenswrapper[4735]: E0128 09:42:54.502887 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.542756 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.542818 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.542828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.542843 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.542853 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.626081 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:38:44.520658294 +0000 UTC Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.645041 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.645079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.645088 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.645102 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.645112 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.657586 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c50beb21-41ab-474f-90d2-ba27152379cf-multus-daemon-config\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.658510 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c50beb21-41ab-474f-90d2-ba27152379cf-multus-daemon-config\") pod \"multus-jxbwx\" (UID: \"c50beb21-41ab-474f-90d2-ba27152379cf\") " pod="openshift-multus/multus-jxbwx" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.698162 4735 generic.go:334] "Generic (PLEG): container finished" podID="61bae89b-69e8-478b-83d8-b493d0e4d1bf" containerID="fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d" exitCode=0 Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.698244 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" event={"ID":"61bae89b-69e8-478b-83d8-b493d0e4d1bf","Type":"ContainerDied","Data":"fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.701090 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb" exitCode=0 Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.701162 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.701235 
4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"f29c1a4afbdb817451750963a72f2b4574c81de917f95e2370193882fa3372f8"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.712869 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.725458 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.739582 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.747830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.747850 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.747859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.747871 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.747881 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.754441 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.769121 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.782899 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.796355 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.812179 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.821759 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-jxbwx" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.828923 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: W0128 09:42:54.840149 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc50beb21_41ab_474f_90d2_ba27152379cf.slice/crio-779147040a3238c64f07fc5c3b52727c3b6a6770d6792c888962b18dbf69cdb5 WatchSource:0}: Error finding container 779147040a3238c64f07fc5c3b52727c3b6a6770d6792c888962b18dbf69cdb5: Status 404 returned error can't find the container with id 779147040a3238c64f07fc5c3b52727c3b6a6770d6792c888962b18dbf69cdb5 Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.847489 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.851695 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.851737 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.851766 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.851783 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.851794 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.860588 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.873871 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.886617 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.898029 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.907510 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.923065 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.934810 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.944393 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.954510 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.956132 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.956171 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.956184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.956202 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.956215 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:54Z","lastTransitionTime":"2026-01-28T09:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.965880 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.979889 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:54 crc kubenswrapper[4735]: I0128 09:42:54.993315 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:54Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.005248 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.017442 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.029686 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.042677 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.058660 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.058688 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.058696 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.058709 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.058719 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.161472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.162026 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.162039 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.162060 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.162074 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.278426 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.278502 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.278512 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.278527 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.278537 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.382379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.382763 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.382780 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.382799 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.382810 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.485311 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.485356 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.485367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.485382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.485393 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.587367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.587410 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.587423 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.587438 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.587451 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.626760 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:47:19.854739054 +0000 UTC Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.689830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.689861 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.689870 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.689883 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.689894 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.706432 4735 generic.go:334] "Generic (PLEG): container finished" podID="61bae89b-69e8-478b-83d8-b493d0e4d1bf" containerID="7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2" exitCode=0 Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.706459 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" event={"ID":"61bae89b-69e8-478b-83d8-b493d0e4d1bf","Type":"ContainerDied","Data":"7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.711648 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.711691 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.711705 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.711715 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.711729 4735 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.711739 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.714836 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jxbwx" event={"ID":"c50beb21-41ab-474f-90d2-ba27152379cf","Type":"ContainerStarted","Data":"3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.714893 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jxbwx" event={"ID":"c50beb21-41ab-474f-90d2-ba27152379cf","Type":"ContainerStarted","Data":"779147040a3238c64f07fc5c3b52727c3b6a6770d6792c888962b18dbf69cdb5"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.719605 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.732641 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.750979 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.763509 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.773109 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.792174 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.792218 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.792231 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.792246 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.792260 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.793700 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.804872 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.819294 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.832028 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.845527 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.856986 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.870320 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.884188 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.894517 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.894557 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.894567 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.894584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.894594 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.902280 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.915689 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.933523 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.948002 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"
/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.963085 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.977248 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.996851 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.996891 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.996901 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.996915 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:55 crc kubenswrapper[4735]: I0128 09:42:55.996926 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:55Z","lastTransitionTime":"2026-01-28T09:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.009037 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.021161 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.034558 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.057469 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.072199 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.086909 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.099783 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.099828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.099843 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.099862 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.099877 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:56Z","lastTransitionTime":"2026-01-28T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.102246 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d
426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.202693 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.202771 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.202786 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.202814 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.202827 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:56Z","lastTransitionTime":"2026-01-28T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.304807 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.304849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.304859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.304876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.304888 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:56Z","lastTransitionTime":"2026-01-28T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.394250 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dqhmn"] Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.394668 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.396580 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.397006 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.397137 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.397174 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.407790 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.407841 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.407853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.407871 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.407885 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:56Z","lastTransitionTime":"2026-01-28T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.412116 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.423286 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.436597 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.451671 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"
/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.467578 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.479637 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/616fb0bb-a83f-4c11-bf69-941e43156a1e-host\") pod \"node-ca-dqhmn\" (UID: \"616fb0bb-a83f-4c11-bf69-941e43156a1e\") " pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.479677 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/616fb0bb-a83f-4c11-bf69-941e43156a1e-serviceca\") pod \"node-ca-dqhmn\" (UID: \"616fb0bb-a83f-4c11-bf69-941e43156a1e\") " pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.479693 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsjf8\" (UniqueName: \"kubernetes.io/projected/616fb0bb-a83f-4c11-bf69-941e43156a1e-kube-api-access-jsjf8\") pod \"node-ca-dqhmn\" (UID: \"616fb0bb-a83f-4c11-bf69-941e43156a1e\") " pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.481947 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.501943 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:56 crc kubenswrapper[4735]: E0128 09:42:56.502068 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.502340 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.502360 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:56 crc kubenswrapper[4735]: E0128 09:42:56.502404 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:42:56 crc kubenswrapper[4735]: E0128 09:42:56.502446 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.510190 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.510237 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.510248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.510264 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.510274 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:56Z","lastTransitionTime":"2026-01-28T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.514026 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.526283 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.537819 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.563484 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.580321 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/616fb0bb-a83f-4c11-bf69-941e43156a1e-host\") pod \"node-ca-dqhmn\" (UID: \"616fb0bb-a83f-4c11-bf69-941e43156a1e\") " pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.580356 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/616fb0bb-a83f-4c11-bf69-941e43156a1e-serviceca\") pod \"node-ca-dqhmn\" (UID: \"616fb0bb-a83f-4c11-bf69-941e43156a1e\") " pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.580374 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsjf8\" (UniqueName: \"kubernetes.io/projected/616fb0bb-a83f-4c11-bf69-941e43156a1e-kube-api-access-jsjf8\") pod \"node-ca-dqhmn\" (UID: \"616fb0bb-a83f-4c11-bf69-941e43156a1e\") " pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.580432 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/616fb0bb-a83f-4c11-bf69-941e43156a1e-host\") pod \"node-ca-dqhmn\" (UID: \"616fb0bb-a83f-4c11-bf69-941e43156a1e\") " pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.581761 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/616fb0bb-a83f-4c11-bf69-941e43156a1e-serviceca\") pod \"node-ca-dqhmn\" (UID: \"616fb0bb-a83f-4c11-bf69-941e43156a1e\") " pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.601920 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.613103 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.613174 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.613188 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 
09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.613211 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.613224 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:56Z","lastTransitionTime":"2026-01-28T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.627363 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:53:59.527896401 +0000 UTC Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.628490 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsjf8\" (UniqueName: \"kubernetes.io/projected/616fb0bb-a83f-4c11-bf69-941e43156a1e-kube-api-access-jsjf8\") pod \"node-ca-dqhmn\" (UID: \"616fb0bb-a83f-4c11-bf69-941e43156a1e\") " pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.665225 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.695908 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.711508 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dqhmn" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.715083 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.715151 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.715166 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.715213 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.715307 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:56Z","lastTransitionTime":"2026-01-28T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.719167 4735 generic.go:334] "Generic (PLEG): container finished" podID="61bae89b-69e8-478b-83d8-b493d0e4d1bf" containerID="258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad" exitCode=0 Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.719212 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" event={"ID":"61bae89b-69e8-478b-83d8-b493d0e4d1bf","Type":"ContainerDied","Data":"258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad"} Jan 28 09:42:56 crc kubenswrapper[4735]: W0128 09:42:56.727086 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod616fb0bb_a83f_4c11_bf69_941e43156a1e.slice/crio-7ce19c0c48f8967507e4fea9712980d31f9320d7d826de56289aa30c63f263e1 WatchSource:0}: Error finding container 7ce19c0c48f8967507e4fea9712980d31f9320d7d826de56289aa30c63f263e1: Status 404 returned error can't find the container with id 7ce19c0c48f8967507e4fea9712980d31f9320d7d826de56289aa30c63f263e1 Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.737262 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.781556 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.814877 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.818708 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.818774 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.818787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.818803 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.818813 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:56Z","lastTransitionTime":"2026-01-28T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.853698 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.894310 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.922026 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.922083 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.922094 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.922122 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.922161 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:56Z","lastTransitionTime":"2026-01-28T09:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.937194 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:56 crc kubenswrapper[4735]: I0128 09:42:56.978649 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f7
4ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:56Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.014414 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.025436 4735 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.025470 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.025482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.025498 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.025509 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.054884 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.100606 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.127956 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.128023 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.128037 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.128071 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.128086 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.135302 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.175607 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.216148 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.230716 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.230797 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.230811 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc 
kubenswrapper[4735]: I0128 09:42:57.230830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.230840 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.255733 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.294949 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.297016 4735 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 09:42:57 crc kubenswrapper[4735]: W0128 09:42:57.297691 4735 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 28 09:42:57 crc kubenswrapper[4735]: W0128 09:42:57.297689 4735 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 28 09:42:57 crc kubenswrapper[4735]: W0128 09:42:57.297709 4735 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 09:42:57 crc kubenswrapper[4735]: W0128 09:42:57.297919 4735 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.333651 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.333689 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.333697 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.333715 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.333725 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.437038 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.437098 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.437113 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.437134 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.437152 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.518444 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.536873 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.540151 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.540191 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.540210 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.540238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.540255 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.557293 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 
09:42:57.575842 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.592084 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.627508 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:13:22.097687377 +0000 UTC Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.635261 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.643283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.643334 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.643346 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.643365 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.643380 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.655346 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.675609 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.689433 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.707143 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.726293 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dqhmn" event={"ID":"616fb0bb-a83f-4c11-bf69-941e43156a1e","Type":"ContainerStarted","Data":"cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.726371 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dqhmn" event={"ID":"616fb0bb-a83f-4c11-bf69-941e43156a1e","Type":"ContainerStarted","Data":"7ce19c0c48f8967507e4fea9712980d31f9320d7d826de56289aa30c63f263e1"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.729908 4735 generic.go:334] "Generic (PLEG): container finished" podID="61bae89b-69e8-478b-83d8-b493d0e4d1bf" containerID="5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf" exitCode=0 Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.729981 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" 
event={"ID":"61bae89b-69e8-478b-83d8-b493d0e4d1bf","Type":"ContainerDied","Data":"5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.740731 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.746175 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.746242 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.746259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.746283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.746298 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.777118 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z 
is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.815815 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.850054 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.850130 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.850153 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.850175 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.850190 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.860961 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.895596 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.935486 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.953307 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.953496 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.953627 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:57 crc 
kubenswrapper[4735]: I0128 09:42:57.953789 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.953942 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:57Z","lastTransitionTime":"2026-01-28T09:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:57 crc kubenswrapper[4735]: I0128 09:42:57.973792 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.021566 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.055502 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.056823 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.057024 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.057152 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 
09:42:58.057303 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.057460 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.098595 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.139611 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.161144 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.161211 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.161222 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.161243 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.161257 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.183155 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d
426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.223835 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.254905 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.264341 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc 
kubenswrapper[4735]: I0128 09:42:58.264413 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.264434 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.264462 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.264482 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.299843 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.307256 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.361132 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.370457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.370533 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.370575 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.370598 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.370611 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.400393 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.434039 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.474269 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.474343 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.474362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.474387 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.474408 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.502381 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.502431 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.502381 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:42:58 crc kubenswrapper[4735]: E0128 09:42:58.502570 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:42:58 crc kubenswrapper[4735]: E0128 09:42:58.502759 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:42:58 crc kubenswrapper[4735]: E0128 09:42:58.502868 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.576716 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.576806 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.576824 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.576848 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.576866 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.628363 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:38:47.025586795 +0000 UTC Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.636643 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.649843 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.665642 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.679073 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.679129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.679161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.679185 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.679204 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.684068 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.698031 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.717428 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.730818 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.737113 4735 generic.go:334] "Generic (PLEG): container finished" podID="61bae89b-69e8-478b-83d8-b493d0e4d1bf" containerID="759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7" exitCode=0 Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.737200 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" event={"ID":"61bae89b-69e8-478b-83d8-b493d0e4d1bf","Type":"ContainerDied","Data":"759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.740742 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.743442 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.746934 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.766835 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.781974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.782016 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.782025 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.782040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.782053 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.797965 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.805924 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.853175 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.895172 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.895214 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.895226 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.895245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.895258 4735 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.899245 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.932958 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.973480 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:58Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.997589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.997635 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.997649 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.997668 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:58 crc kubenswrapper[4735]: I0128 09:42:58.997678 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:58Z","lastTransitionTime":"2026-01-28T09:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.013560 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.053495 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.093275 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.100933 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.101002 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.101017 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.101043 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.101058 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:59Z","lastTransitionTime":"2026-01-28T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.142045 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.176227 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.205354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.205424 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.205444 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 
09:42:59.205471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.205489 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:59Z","lastTransitionTime":"2026-01-28T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.216870 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.261001 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.298721 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.308487 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.308569 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.308596 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:59 crc 
kubenswrapper[4735]: I0128 09:42:59.308648 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.308674 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:59Z","lastTransitionTime":"2026-01-28T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.342271 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.380997 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.412290 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.412337 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.412353 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.412372 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.412387 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:59Z","lastTransitionTime":"2026-01-28T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.417313 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.458177 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.501909 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.515365 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:59 crc 
kubenswrapper[4735]: I0128 09:42:59.515426 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.515446 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.515476 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.515494 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:59Z","lastTransitionTime":"2026-01-28T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.537974 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.575543 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.619248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.619305 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.619321 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.619344 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.619362 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:59Z","lastTransitionTime":"2026-01-28T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.627508 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.629595 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:57:12.774199656 +0000 UTC Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.723164 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.723213 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.723225 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.723243 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.723256 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:59Z","lastTransitionTime":"2026-01-28T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.752187 4735 generic.go:334] "Generic (PLEG): container finished" podID="61bae89b-69e8-478b-83d8-b493d0e4d1bf" containerID="96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd" exitCode=0 Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.752255 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" event={"ID":"61bae89b-69e8-478b-83d8-b493d0e4d1bf","Type":"ContainerDied","Data":"96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.784115 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e6
8b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.803649 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.825492 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.827800 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:59 crc 
kubenswrapper[4735]: I0128 09:42:59.827839 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.827849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.827867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.827880 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:59Z","lastTransitionTime":"2026-01-28T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.843532 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.857840 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.874104 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.897300 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.931539 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.931694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.931785 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.931864 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.931934 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:42:59Z","lastTransitionTime":"2026-01-28T09:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.942303 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:42:59 crc kubenswrapper[4735]: I0128 09:42:59.974480 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:42:59Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.015268 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.035035 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.035072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.035084 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.035130 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.035148 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.059968 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.097585 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.133957 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.137490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.137524 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.137536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.137553 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.137565 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.178457 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.240502 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.240568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.240582 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.240608 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.240625 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.344378 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.344489 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.344514 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.344558 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.344587 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.447788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.447838 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.447847 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.447866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.447879 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.501899 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.501936 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.501936 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:00 crc kubenswrapper[4735]: E0128 09:43:00.502093 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:00 crc kubenswrapper[4735]: E0128 09:43:00.502353 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:00 crc kubenswrapper[4735]: E0128 09:43:00.502418 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.549908 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.549949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.549958 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.549971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.549982 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.629870 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:24:58.071992828 +0000 UTC Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.652479 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.652618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.652637 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.652661 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.652676 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.755439 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.755508 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.755536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.755568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.755583 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.763741 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" event={"ID":"61bae89b-69e8-478b-83d8-b493d0e4d1bf","Type":"ContainerStarted","Data":"3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.770195 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.771392 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.771499 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.780678 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.801076 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.806073 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.807188 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.814727 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.826617 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.840287 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.862254 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.863203 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.863257 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.863270 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.863292 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.863306 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.877762 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d
426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.892215 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.905281 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.917765 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.930357 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.947228 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.958136 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.966270 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.966319 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.966330 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.966351 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.966364 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:00Z","lastTransitionTime":"2026-01-28T09:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.972923 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:00 crc kubenswrapper[4735]: I0128 09:43:00.989315 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:00Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.005794 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.019886 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.034031 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.047266 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.059595 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.069531 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.069591 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.069601 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.069622 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.069640 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:01Z","lastTransitionTime":"2026-01-28T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.073701 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.092554 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.108921 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.136949 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.172145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.172224 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.172255 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.172277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.172292 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:01Z","lastTransitionTime":"2026-01-28T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.176530 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.217373 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.267507 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.275854 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.275939 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.275952 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.275974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.275991 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:01Z","lastTransitionTime":"2026-01-28T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.295450 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:01Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.381277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.381339 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.381355 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.381384 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.381404 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:01Z","lastTransitionTime":"2026-01-28T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.484043 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.484510 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.484520 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.484536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.484548 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:01Z","lastTransitionTime":"2026-01-28T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.588726 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.589106 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.589165 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.589230 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.589341 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:01Z","lastTransitionTime":"2026-01-28T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.630728 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:15:06.803991199 +0000 UTC Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.692601 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.692641 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.692651 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.692684 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.692698 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:01Z","lastTransitionTime":"2026-01-28T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.774711 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.796516 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.796573 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.796584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.796602 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.796618 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:01Z","lastTransitionTime":"2026-01-28T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.900551 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.900619 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.900635 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.900723 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:01 crc kubenswrapper[4735]: I0128 09:43:01.900741 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:01Z","lastTransitionTime":"2026-01-28T09:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.003342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.003410 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.003425 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.003445 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.003460 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.106968 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.107027 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.107040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.107062 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.107079 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.209679 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.209725 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.209734 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.209770 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.209781 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.255492 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.256094 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 09:43:18.256049302 +0000 UTC m=+51.405713675 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.313880 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.314216 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.314282 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.314354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.314413 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.356825 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.357007 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.357065 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.357081 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.357271 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.357309 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.357389 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.357433 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:18.357386242 +0000 UTC m=+51.507050655 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.357474 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:18.357455264 +0000 UTC m=+51.507119697 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.357437 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.357517 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.357593 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:18.357576437 +0000 UTC m=+51.507240910 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.357909 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.358071 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.358134 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.358275 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:18.358246126 +0000 UTC m=+51.507910699 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.417639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.417693 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.417706 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.417728 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.417742 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.501964 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.502173 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.502885 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.503025 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.503121 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:02 crc kubenswrapper[4735]: E0128 09:43:02.503212 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.521316 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.521430 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.521453 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.521523 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.521580 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.624599 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.624633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.624642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.624656 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.624665 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.631940 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:29:41.674755331 +0000 UTC Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.727998 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.728043 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.728053 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.728071 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.728083 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.778290 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.830981 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.831023 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.831033 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.831051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.831062 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.933628 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.933673 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.933682 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.933704 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:02 crc kubenswrapper[4735]: I0128 09:43:02.933718 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:02Z","lastTransitionTime":"2026-01-28T09:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.036879 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.036939 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.036953 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.036974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.036988 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.094873 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.094944 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.094966 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.094984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.094998 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: E0128 09:43:03.108243 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.115488 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.115548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.115560 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.115585 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.115600 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: E0128 09:43:03.132641 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.137769 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.137815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.137826 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.137849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.137862 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: E0128 09:43:03.153894 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.159081 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.159132 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.159147 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.159172 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.159190 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: E0128 09:43:03.180143 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.185880 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.185931 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.185945 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.185967 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.185980 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: E0128 09:43:03.205044 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: E0128 09:43:03.205220 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.207155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.207197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.207212 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.207238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.207254 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.311377 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.311443 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.311458 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.311481 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.311494 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.414541 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.414630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.414644 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.414660 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.414673 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.517396 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.517457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.517468 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.517490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.517503 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.620956 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.621005 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.621017 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.621037 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.621052 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.632515 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:14:18.330800872 +0000 UTC Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.729633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.729718 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.729736 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.729794 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.729815 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.783567 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/0.log" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.786130 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72" exitCode=1 Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.786211 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.787492 4735 scope.go:117] "RemoveContainer" containerID="42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.808514 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.828184 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.832174 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.832235 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.832252 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.832277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.832295 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.861720 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:03.374854 6075 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 09:43:03.374910 6075 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0128 09:43:03.374953 6075 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 09:43:03.374962 6075 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 09:43:03.374977 6075 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 09:43:03.374985 6075 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 09:43:03.374980 6075 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 09:43:03.375019 6075 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 09:43:03.375022 6075 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 09:43:03.375037 6075 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 09:43:03.375046 6075 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 09:43:03.375056 6075 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 09:43:03.375058 6075 factory.go:656] Stopping watch factory\\\\nI0128 09:43:03.375050 6075 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 09:43:03.375080 6075 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.876523 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.892800 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.911910 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.927800 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.935270 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.935329 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.935408 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.935430 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.935445 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:03Z","lastTransitionTime":"2026-01-28T09:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.945810 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.960816 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.979861 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:03 crc kubenswrapper[4735]: I0128 09:43:03.994733 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:03Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.011927 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.026000 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.040091 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.040143 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.040155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.040187 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.040201 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.042983 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.143056 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.143099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.143110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.143128 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.143142 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.245841 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.245894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.245905 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.245927 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.245941 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.349500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.349562 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.349581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.349615 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.349628 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.452581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.452637 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.452651 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.452670 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.452681 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.502184 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.502304 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:04 crc kubenswrapper[4735]: E0128 09:43:04.502386 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:04 crc kubenswrapper[4735]: E0128 09:43:04.502483 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.502201 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:04 crc kubenswrapper[4735]: E0128 09:43:04.502588 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.556550 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.556608 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.556623 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.556649 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.556665 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.633228 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:44:13.007365474 +0000 UTC Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.659554 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.660140 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.660211 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.660284 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.660346 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.763124 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.763161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.763170 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.763186 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.763197 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.792021 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/1.log" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.792727 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/0.log" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.795186 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf" exitCode=1 Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.795233 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.795304 4735 scope.go:117] "RemoveContainer" containerID="42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.796072 4735 scope.go:117] "RemoveContainer" containerID="c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf" Jan 28 09:43:04 crc kubenswrapper[4735]: E0128 09:43:04.796391 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.820504 4735 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.839688 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.857558 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.866740 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.866802 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.866813 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.866832 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.866844 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.881561 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.907552 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.926835 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.950192 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.966490 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.970645 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.970707 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.970718 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.970735 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.970766 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:04Z","lastTransitionTime":"2026-01-28T09:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:04 crc kubenswrapper[4735]: I0128 09:43:04.984253 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.000214 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:04Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.015518 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.031073 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.033262 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5"] Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.033793 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.035840 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.036546 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.058338 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:03.374854 6075 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 09:43:03.374910 6075 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 09:43:03.374953 6075 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0128 09:43:03.374962 6075 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 09:43:03.374977 6075 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 09:43:03.374985 6075 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 09:43:03.374980 6075 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 09:43:03.375019 6075 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 09:43:03.375022 6075 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 09:43:03.375037 6075 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 09:43:03.375046 6075 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 09:43:03.375056 6075 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 09:43:03.375058 6075 factory.go:656] Stopping watch factory\\\\nI0128 09:43:03.375050 6075 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 09:43:03.375080 6075 ovnkube.go:599] Stopped ovnkube\\\\nI0128 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001
ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.073119 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.074148 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.074192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.074202 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.074220 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.074232 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:05Z","lastTransitionTime":"2026-01-28T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.088880 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.089415 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6pwk\" (UniqueName: \"kubernetes.io/projected/48a21a8a-4b58-4b90-bd07-a147834171f8-kube-api-access-w6pwk\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.089507 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48a21a8a-4b58-4b90-bd07-a147834171f8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.089596 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/48a21a8a-4b58-4b90-bd07-a147834171f8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.089658 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/48a21a8a-4b58-4b90-bd07-a147834171f8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.105415 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a16
88df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni
-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.122895 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.142067 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.157783 4735 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.173062 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.177115 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.177182 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.177194 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:05 crc 
kubenswrapper[4735]: I0128 09:43:05.177217 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.177231 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:05Z","lastTransitionTime":"2026-01-28T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.188399 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.190872 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/48a21a8a-4b58-4b90-bd07-a147834171f8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.190915 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/48a21a8a-4b58-4b90-bd07-a147834171f8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.190982 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6pwk\" (UniqueName: \"kubernetes.io/projected/48a21a8a-4b58-4b90-bd07-a147834171f8-kube-api-access-w6pwk\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.191027 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48a21a8a-4b58-4b90-bd07-a147834171f8-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.191872 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/48a21a8a-4b58-4b90-bd07-a147834171f8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.192155 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/48a21a8a-4b58-4b90-bd07-a147834171f8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.200218 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48a21a8a-4b58-4b90-bd07-a147834171f8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.201081 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.216656 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6pwk\" (UniqueName: \"kubernetes.io/projected/48a21a8a-4b58-4b90-bd07-a147834171f8-kube-api-access-w6pwk\") pod \"ovnkube-control-plane-749d76644c-22sq5\" (UID: \"48a21a8a-4b58-4b90-bd07-a147834171f8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.225618 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:03.374854 6075 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 09:43:03.374910 6075 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 09:43:03.374953 6075 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 09:43:03.374962 6075 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 09:43:03.374977 6075 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 09:43:03.374985 6075 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 09:43:03.374980 6075 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 09:43:03.375019 6075 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 09:43:03.375022 6075 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 09:43:03.375037 6075 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 09:43:03.375046 6075 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 09:43:03.375056 6075 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 09:43:03.375058 6075 factory.go:656] Stopping watch factory\\\\nI0128 09:43:03.375050 6075 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 09:43:03.375080 6075 ovnkube.go:599] Stopped ovnkube\\\\nI0128 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001
ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.242301 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.265048 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.281215 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.281248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.281260 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.281278 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.281290 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:05Z","lastTransitionTime":"2026-01-28T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.283379 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.297945 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.312469 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.330205 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:
46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6
075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.349683 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" Jan 28 09:43:05 crc kubenswrapper[4735]: W0128 09:43:05.370326 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48a21a8a_4b58_4b90_bd07_a147834171f8.slice/crio-f203bfc9cab35a9d85c1cc6d6650e6d7dd2f90ba9ff1afbc45098c935778011f WatchSource:0}: Error finding container f203bfc9cab35a9d85c1cc6d6650e6d7dd2f90ba9ff1afbc45098c935778011f: Status 404 returned error can't find the container with id f203bfc9cab35a9d85c1cc6d6650e6d7dd2f90ba9ff1afbc45098c935778011f Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.384624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.384739 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 
09:43:05.384783 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.384836 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.384857 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:05Z","lastTransitionTime":"2026-01-28T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.487507 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.487591 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.487624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.487655 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.487667 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:05Z","lastTransitionTime":"2026-01-28T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.591968 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.592021 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.592031 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.592052 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.592063 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:05Z","lastTransitionTime":"2026-01-28T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.633397 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:02:17.300950338 +0000 UTC Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.695296 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.695359 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.695382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.695407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.695421 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:05Z","lastTransitionTime":"2026-01-28T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.808592 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.808656 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.808666 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.808688 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.808707 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:05Z","lastTransitionTime":"2026-01-28T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.810630 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/1.log" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.816311 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" event={"ID":"48a21a8a-4b58-4b90-bd07-a147834171f8","Type":"ContainerStarted","Data":"0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.816382 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" event={"ID":"48a21a8a-4b58-4b90-bd07-a147834171f8","Type":"ContainerStarted","Data":"6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.816396 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" event={"ID":"48a21a8a-4b58-4b90-bd07-a147834171f8","Type":"ContainerStarted","Data":"f203bfc9cab35a9d85c1cc6d6650e6d7dd2f90ba9ff1afbc45098c935778011f"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.833702 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.850443 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.876351 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.890428 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.911705 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:03.374854 6075 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 09:43:03.374910 6075 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 09:43:03.374953 6075 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0128 09:43:03.374962 6075 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 09:43:03.374977 6075 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 09:43:03.374985 6075 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 09:43:03.374980 6075 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 09:43:03.375019 6075 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 09:43:03.375022 6075 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 09:43:03.375037 6075 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 09:43:03.375046 6075 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 09:43:03.375056 6075 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 09:43:03.375058 6075 factory.go:656] Stopping watch factory\\\\nI0128 09:43:03.375050 6075 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 09:43:03.375080 6075 ovnkube.go:599] Stopped ovnkube\\\\nI0128 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001
ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.912367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.912423 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.912439 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.912459 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.912472 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:05Z","lastTransitionTime":"2026-01-28T09:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.924569 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.936122 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.948293 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.963717 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.975108 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:05 crc kubenswrapper[4735]: I0128 09:43:05.990552 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:05Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.004698 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:06Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.015699 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.015765 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.015778 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.015797 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.015815 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.020786 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:06Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.035284 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:06Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.050129 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:06Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.119030 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc 
kubenswrapper[4735]: I0128 09:43:06.119076 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.119086 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.119110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.119122 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.222099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.222153 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.222163 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.222181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.222193 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.324523 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.324573 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.324586 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.324614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.324635 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.453945 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.453991 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.454005 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.454028 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.454040 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.502588 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.502701 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.502776 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:06 crc kubenswrapper[4735]: E0128 09:43:06.502820 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:06 crc kubenswrapper[4735]: E0128 09:43:06.502948 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:06 crc kubenswrapper[4735]: E0128 09:43:06.503120 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.556549 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.556622 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.556640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.556665 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.556685 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.633628 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:37:38.04880639 +0000 UTC Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.659594 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.659654 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.659665 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.659684 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.659699 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.762740 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.762793 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.762806 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.762823 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.762836 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.865427 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.865486 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.865503 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.865528 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.865547 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.911113 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-zcskf"] Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.911620 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:06 crc kubenswrapper[4735]: E0128 09:43:06.911718 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.934583 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 
09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:06Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.954421 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:06Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.968260 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 
09:43:06.968310 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.968322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.968345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.968359 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:06Z","lastTransitionTime":"2026-01-28T09:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.972090 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:06Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:06 crc kubenswrapper[4735]: I0128 09:43:06.989416 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:06Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc 
kubenswrapper[4735]: I0128 09:43:07.005880 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.016807 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nj5d\" (UniqueName: \"kubernetes.io/projected/06fddb5b-5b1e-40de-9b21-e28092521208-kube-api-access-5nj5d\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.016876 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod 
\"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.019084 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.038918 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-
01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.053583 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb
9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.066498 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.071716 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.071882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.071947 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.071984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.072015 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:07Z","lastTransitionTime":"2026-01-28T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.097681 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:03.374854 6075 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 09:43:03.374910 6075 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 09:43:03.374953 6075 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0128 09:43:03.374962 6075 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 09:43:03.374977 6075 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 09:43:03.374985 6075 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 09:43:03.374980 6075 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 09:43:03.375019 6075 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 09:43:03.375022 6075 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 09:43:03.375037 6075 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 09:43:03.375046 6075 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 09:43:03.375056 6075 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 09:43:03.375058 6075 factory.go:656] Stopping watch factory\\\\nI0128 09:43:03.375050 6075 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 09:43:03.375080 6075 ovnkube.go:599] Stopped ovnkube\\\\nI0128 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001
ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.111117 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.118065 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nj5d\" (UniqueName: \"kubernetes.io/projected/06fddb5b-5b1e-40de-9b21-e28092521208-kube-api-access-5nj5d\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.118130 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:07 crc kubenswrapper[4735]: E0128 09:43:07.118314 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:07 crc kubenswrapper[4735]: E0128 09:43:07.118395 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs podName:06fddb5b-5b1e-40de-9b21-e28092521208 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:07.618374068 +0000 UTC m=+40.768038451 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs") pod "network-metrics-daemon-zcskf" (UID: "06fddb5b-5b1e-40de-9b21-e28092521208") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.126957 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.138409 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nj5d\" (UniqueName: \"kubernetes.io/projected/06fddb5b-5b1e-40de-9b21-e28092521208-kube-api-access-5nj5d\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.143251 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.157181 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.173838 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.175954 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.176011 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.176030 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.176052 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.176071 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:07Z","lastTransitionTime":"2026-01-28T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.188042 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.279116 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.279414 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.279504 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.279645 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.279720 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:07Z","lastTransitionTime":"2026-01-28T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.382565 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.382629 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.382647 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.382675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.382693 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:07Z","lastTransitionTime":"2026-01-28T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.486672 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.486742 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.486788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.486814 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.486835 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:07Z","lastTransitionTime":"2026-01-28T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.521606 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.540303 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.564318 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.584898 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc 
kubenswrapper[4735]: I0128 09:43:07.591304 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.591339 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.591348 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.591365 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.591377 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:07Z","lastTransitionTime":"2026-01-28T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.601062 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.611593 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.623632 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:07 crc kubenswrapper[4735]: E0128 09:43:07.623853 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:07 crc kubenswrapper[4735]: E0128 09:43:07.624391 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs podName:06fddb5b-5b1e-40de-9b21-e28092521208 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:08.62437397 +0000 UTC m=+41.774038343 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs") pod "network-metrics-daemon-zcskf" (UID: "06fddb5b-5b1e-40de-9b21-e28092521208") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.625802 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.634273 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:37:21.317675196 +0000 UTC Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.635987 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb
9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.645415 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.661933 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e5dbc0b2c9b5a602a03306b28946d1eb47e7bdf4ae56fb954bb36a5c4f1f72\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:03Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:03.374854 6075 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 09:43:03.374910 6075 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 09:43:03.374953 6075 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0128 09:43:03.374962 6075 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 09:43:03.374977 6075 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 09:43:03.374985 6075 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 09:43:03.374980 6075 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 09:43:03.375019 6075 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 09:43:03.375022 6075 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 09:43:03.375037 6075 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 09:43:03.375046 6075 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 09:43:03.375056 6075 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 09:43:03.375058 6075 factory.go:656] Stopping watch factory\\\\nI0128 09:43:03.375050 6075 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 09:43:03.375080 6075 ovnkube.go:599] Stopped ovnkube\\\\nI0128 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001
ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.671404 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.681993 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.695891 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.696295 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.696335 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.696345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:07 crc 
kubenswrapper[4735]: I0128 09:43:07.696363 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.696373 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:07Z","lastTransitionTime":"2026-01-28T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.707604 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.719138 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.729181 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:07Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.798721 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.798815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.798838 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.798868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.798889 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:07Z","lastTransitionTime":"2026-01-28T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.901672 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.901722 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.901731 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.901838 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:07 crc kubenswrapper[4735]: I0128 09:43:07.901852 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:07Z","lastTransitionTime":"2026-01-28T09:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.004415 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.004450 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.004458 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.004471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.004481 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.106867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.106937 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.106958 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.106986 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.107006 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.209588 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.209635 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.209646 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.209662 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.209675 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.313658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.313742 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.313813 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.313841 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.313861 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.416797 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.416874 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.416901 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.416930 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.416950 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.502518 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.502608 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:08 crc kubenswrapper[4735]: E0128 09:43:08.502643 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:08 crc kubenswrapper[4735]: E0128 09:43:08.502814 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.502922 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:08 crc kubenswrapper[4735]: E0128 09:43:08.503050 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.503117 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:08 crc kubenswrapper[4735]: E0128 09:43:08.503227 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.519830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.519868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.519878 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.519893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.519906 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.623060 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.623099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.623108 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.623122 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.623133 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.635798 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:08:47.502170642 +0000 UTC Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.635977 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:08 crc kubenswrapper[4735]: E0128 09:43:08.636179 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:08 crc kubenswrapper[4735]: E0128 09:43:08.636275 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs podName:06fddb5b-5b1e-40de-9b21-e28092521208 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:10.636258941 +0000 UTC m=+43.785923314 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs") pod "network-metrics-daemon-zcskf" (UID: "06fddb5b-5b1e-40de-9b21-e28092521208") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.725273 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.725321 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.725332 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.725370 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.725380 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.830107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.830163 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.830179 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.830202 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.830220 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.933866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.933894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.933904 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.933917 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:08 crc kubenswrapper[4735]: I0128 09:43:08.933925 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:08Z","lastTransitionTime":"2026-01-28T09:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.036461 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.036515 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.036529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.036548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.036563 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.138637 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.138683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.138692 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.138706 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.138714 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.242002 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.242393 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.242598 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.242844 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.243061 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.345516 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.345552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.345560 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.345575 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.345583 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.448436 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.448477 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.448489 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.448504 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.448517 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.551017 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.551068 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.551079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.551093 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.551104 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.636720 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:36:47.152940756 +0000 UTC Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.654122 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.654194 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.654216 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.654256 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.654287 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.757024 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.757076 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.757089 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.757107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.757121 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.859855 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.859895 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.859907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.859925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.859938 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.962617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.962658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.962671 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.962688 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:09 crc kubenswrapper[4735]: I0128 09:43:09.962703 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:09Z","lastTransitionTime":"2026-01-28T09:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.065887 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.065941 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.065952 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.065967 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.065982 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.168657 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.168697 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.168710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.168726 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.168738 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.272069 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.272121 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.272137 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.272161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.272182 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.374412 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.374469 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.374485 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.374507 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.374525 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.477876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.477928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.477945 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.477967 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.477984 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.502341 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.502377 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.502393 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.502432 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:10 crc kubenswrapper[4735]: E0128 09:43:10.502561 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:10 crc kubenswrapper[4735]: E0128 09:43:10.502683 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:10 crc kubenswrapper[4735]: E0128 09:43:10.502823 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:10 crc kubenswrapper[4735]: E0128 09:43:10.502946 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.581646 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.581713 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.581725 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.581758 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.581770 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.637708 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:32:39.830751333 +0000 UTC Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.657427 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:10 crc kubenswrapper[4735]: E0128 09:43:10.657585 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:10 crc kubenswrapper[4735]: E0128 09:43:10.657696 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs podName:06fddb5b-5b1e-40de-9b21-e28092521208 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:14.657666574 +0000 UTC m=+47.807330987 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs") pod "network-metrics-daemon-zcskf" (UID: "06fddb5b-5b1e-40de-9b21-e28092521208") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.684626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.684854 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.684878 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.684906 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.684925 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.788068 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.788139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.788160 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.788189 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.788212 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.891389 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.891431 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.891441 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.891456 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.891464 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.994159 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.994192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.994200 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.994214 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:10 crc kubenswrapper[4735]: I0128 09:43:10.994222 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:10Z","lastTransitionTime":"2026-01-28T09:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.096344 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.096391 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.096405 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.096425 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.096441 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:11Z","lastTransitionTime":"2026-01-28T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.198044 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.198087 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.198098 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.198113 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.198123 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:11Z","lastTransitionTime":"2026-01-28T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.300510 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.300557 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.300567 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.300581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.300591 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:11Z","lastTransitionTime":"2026-01-28T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.403436 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.403471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.403482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.403498 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.403509 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:11Z","lastTransitionTime":"2026-01-28T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.506340 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.506422 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.506444 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.506474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.506496 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:11Z","lastTransitionTime":"2026-01-28T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.609653 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.609688 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.609702 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.609720 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.609731 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:11Z","lastTransitionTime":"2026-01-28T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.638323 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:10:58.107756322 +0000 UTC Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.712254 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.712297 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.712309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.712327 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.712339 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:11Z","lastTransitionTime":"2026-01-28T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.814809 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.814851 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.814859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.814873 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.814883 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:11Z","lastTransitionTime":"2026-01-28T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.917145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.917202 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.917214 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.917233 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:11 crc kubenswrapper[4735]: I0128 09:43:11.917245 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:11Z","lastTransitionTime":"2026-01-28T09:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.020269 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.020341 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.020357 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.020375 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.020389 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.123824 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.123877 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.123889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.123908 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.123921 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.226591 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.226635 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.226645 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.226661 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.226673 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.330018 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.330054 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.330062 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.330075 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.330084 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.432934 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.432982 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.432999 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.433022 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.433039 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.502395 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.502465 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:12 crc kubenswrapper[4735]: E0128 09:43:12.502591 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.502641 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.502657 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:12 crc kubenswrapper[4735]: E0128 09:43:12.502780 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:12 crc kubenswrapper[4735]: E0128 09:43:12.502916 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:12 crc kubenswrapper[4735]: E0128 09:43:12.503125 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.535242 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.535311 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.535331 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.535355 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.535372 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.637690 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.637765 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.637783 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.637804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.637819 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.638811 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:08:05.491961203 +0000 UTC Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.740447 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.740523 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.740536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.740561 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.740578 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.843676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.843967 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.843984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.844004 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.844020 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.946903 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.946967 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.946990 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.947024 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:12 crc kubenswrapper[4735]: I0128 09:43:12.947047 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:12Z","lastTransitionTime":"2026-01-28T09:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.049912 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.049999 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.050023 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.050049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.050068 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.152802 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.152843 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.152851 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.152867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.152876 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.256049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.256117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.256136 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.256161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.256182 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.358893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.358980 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.358993 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.359010 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.359024 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.461517 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.461564 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.461575 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.461594 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.461607 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.565358 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.565391 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.565402 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.565416 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.565428 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.590178 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.590277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.590301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.590328 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.590350 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: E0128 09:43:13.610935 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:13Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.615104 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.615144 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.615156 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.615173 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.615185 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: E0128 09:43:13.629968 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:13Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.634057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.634126 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.634151 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.634179 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.634202 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.639379 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:48:37.975798699 +0000 UTC Jan 28 09:43:13 crc kubenswrapper[4735]: E0128 09:43:13.652128 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",
\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:13Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.657454 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.657498 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.657511 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.657528 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.657540 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: E0128 09:43:13.672977 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:13Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.678742 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.678828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.678841 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.678866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.678878 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: E0128 09:43:13.698938 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:13Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:13 crc kubenswrapper[4735]: E0128 09:43:13.699112 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.701163 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.701216 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.701227 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.701246 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.701261 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.804002 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.804033 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.804043 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.804057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.804068 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.907444 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.907536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.907557 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.907603 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:13 crc kubenswrapper[4735]: I0128 09:43:13.907623 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:13Z","lastTransitionTime":"2026-01-28T09:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.011197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.011287 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.011313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.011348 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.011371 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.114385 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.114444 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.114460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.114481 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.114498 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.216611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.216670 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.216686 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.216708 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.216724 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.322440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.323205 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.324511 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.324582 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.324600 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.427721 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.427779 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.427789 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.427805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.427818 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.501834 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.501863 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:14 crc kubenswrapper[4735]: E0128 09:43:14.502336 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.501926 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:14 crc kubenswrapper[4735]: E0128 09:43:14.502545 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.501900 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:14 crc kubenswrapper[4735]: E0128 09:43:14.502719 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:14 crc kubenswrapper[4735]: E0128 09:43:14.502352 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.530221 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.530279 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.530293 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.530308 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.530319 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.637702 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.637784 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.637802 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.637824 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.637847 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.640483 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:28:01.751903695 +0000 UTC Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.701238 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:14 crc kubenswrapper[4735]: E0128 09:43:14.701482 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:14 crc kubenswrapper[4735]: E0128 09:43:14.701560 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs podName:06fddb5b-5b1e-40de-9b21-e28092521208 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:22.701536537 +0000 UTC m=+55.851200940 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs") pod "network-metrics-daemon-zcskf" (UID: "06fddb5b-5b1e-40de-9b21-e28092521208") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.741652 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.741732 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.741808 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.741838 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.741871 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.844380 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.844988 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.845064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.845163 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.845256 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.947621 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.947668 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.947679 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.947697 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:14 crc kubenswrapper[4735]: I0128 09:43:14.947711 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:14Z","lastTransitionTime":"2026-01-28T09:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.049902 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.049958 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.049971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.049988 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.050000 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.153537 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.153627 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.153651 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.153682 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.153707 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.256926 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.256998 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.257017 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.257046 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.257065 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.360201 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.360259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.360276 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.360299 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.360317 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.402604 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.404183 4735 scope.go:117] "RemoveContainer" containerID="c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.430601 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.452574 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.462975 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.463039 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.463048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.463066 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.463079 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.475810 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.494095 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.508661 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc 
kubenswrapper[4735]: I0128 09:43:15.525433 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.542568 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.558599 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.565829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.565872 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.565882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc 
kubenswrapper[4735]: I0128 09:43:15.565899 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.565911 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.572793 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.595964 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0
e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.609925 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.623682 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.639572 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.641571 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:35:25.019068731 +0000 UTC Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.653522 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d
15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.668538 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.668598 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.668611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc kubenswrapper[4735]: 
I0128 09:43:15.668633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.668648 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.673284 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.686315 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.772816 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc 
kubenswrapper[4735]: I0128 09:43:15.772912 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.772942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.772975 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.773000 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.851612 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/1.log" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.854397 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.854859 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.877987 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.878025 4735 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.878034 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.878048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.878057 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.881560 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb
9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.901208 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc 
kubenswrapper[4735]: I0128 09:43:15.914354 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.936518 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.953143 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.966323 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.981614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.981867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.981942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.982020 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.982084 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:15Z","lastTransitionTime":"2026-01-28T09:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:15 crc kubenswrapper[4735]: I0128 09:43:15.982417 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:15Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.003081 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.015405 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.028632 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.046356 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.060537 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.077208 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.085103 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.085168 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.085194 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.085246 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.085266 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:16Z","lastTransitionTime":"2026-01-28T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.098235 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z 
is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.116111 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"
}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.131863 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.188382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 
09:43:16.188442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.188456 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.188483 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.188495 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:16Z","lastTransitionTime":"2026-01-28T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.291147 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.291457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.291586 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.291685 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.291772 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:16Z","lastTransitionTime":"2026-01-28T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.394290 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.394362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.394382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.394411 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.394428 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:16Z","lastTransitionTime":"2026-01-28T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.497706 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.497766 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.497782 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.497797 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.497806 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:16Z","lastTransitionTime":"2026-01-28T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.502271 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.502341 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.502351 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:16 crc kubenswrapper[4735]: E0128 09:43:16.502380 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:16 crc kubenswrapper[4735]: E0128 09:43:16.502570 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:16 crc kubenswrapper[4735]: E0128 09:43:16.502845 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.502920 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:16 crc kubenswrapper[4735]: E0128 09:43:16.503121 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.601180 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.601455 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.601517 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.601581 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.601674 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:16Z","lastTransitionTime":"2026-01-28T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.642438 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:34:05.009989363 +0000 UTC Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.705079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.705152 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.705165 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.705191 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.705207 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:16Z","lastTransitionTime":"2026-01-28T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.807779 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.807866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.807893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.807926 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.807953 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:16Z","lastTransitionTime":"2026-01-28T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.860143 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/2.log" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.861483 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/1.log" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.864628 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847" exitCode=1 Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.864664 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.864727 4735 scope.go:117] "RemoveContainer" containerID="c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.866102 4735 scope.go:117] "RemoveContainer" containerID="ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847" Jan 28 09:43:16 crc kubenswrapper[4735]: E0128 09:43:16.866461 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.889725 4735 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.907928 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.910238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.910275 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.910288 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:16 crc 
kubenswrapper[4735]: I0128 09:43:16.910307 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.910321 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:16Z","lastTransitionTime":"2026-01-28T09:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.922605 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.936390 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.962504 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288236 6416 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288249 6416 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288295 6416 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288347 6416 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288394 6416 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.308511 6416 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 09:43:16.308555 6416 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 09:43:16.308627 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0128 09:43:16.308667 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 09:43:16.308794 6416 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.974388 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.976859 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.987840 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 09:43:16 crc kubenswrapper[4735]: I0128 09:43:16.994327 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:16Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.007855 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.013642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.013694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.013705 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.013724 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.013735 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.027173 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.042765 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.058736 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:
46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6
075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.070950 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.092187 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.105348 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.116149 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.116207 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.116225 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.116245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.116261 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.118246 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc 
kubenswrapper[4735]: I0128 09:43:17.132183 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.146895 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.163521 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.181606 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.201103 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.217420 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.219918 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.219978 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.219994 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.220015 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.220030 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.236112 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc673
3f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.255728 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.275990 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.302512 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.322884 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.322928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.322943 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.322963 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.322977 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.322828 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.339439 4735 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc 
kubenswrapper[4735]: I0128 09:43:17.356050 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.370654 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.384119 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.397839 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.423242 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288236 6416 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288249 6416 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288295 6416 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288347 6416 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288394 6416 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.308511 6416 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 09:43:16.308555 6416 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 09:43:16.308627 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0128 09:43:16.308667 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 09:43:16.308794 6416 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.425953 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.425989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.425999 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.426018 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.426030 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.438266 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.516229 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.529356 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.529928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.529980 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.529997 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.530019 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.530034 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.551352 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96827c89c99efd6f30bda1f372818c4f8c05ef40688ad03f03fb11e85a366bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:04Z\\\",\\\"message\\\":\\\"{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.88\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 09:43:04.667551 6215 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI0128 09:43:04.667591 6215 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 3.542905ms\\\\nF0128 09:43:04.667595 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288236 6416 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288249 6416 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288295 6416 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288347 6416 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288394 6416 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.308511 6416 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 09:43:16.308555 6416 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 09:43:16.308627 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0128 09:43:16.308667 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 09:43:16.308794 6416 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.567583 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.586006 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.604218 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.626135 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.631603 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.631634 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.631683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.631699 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.631710 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.642856 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:44:01.626746489 +0000 UTC Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.642960 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.658586 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.674778 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.689937 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
8T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.705737 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.726347 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb
9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.734186 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.734221 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.734234 4735 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.734264 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.734275 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.741956 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 
28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.762134 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.774868 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.791321 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.836593 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.836639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.836651 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.836669 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.836684 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.869260 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/2.log" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.873615 4735 scope.go:117] "RemoveContainer" containerID="ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847" Jan 28 09:43:17 crc kubenswrapper[4735]: E0128 09:43:17.873798 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.887488 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.902554 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.922029 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.934196 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.939721 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.939783 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.939794 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.939809 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.939818 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:17Z","lastTransitionTime":"2026-01-28T09:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.955814 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288236 6416 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288249 6416 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288295 6416 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288347 6416 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288394 6416 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.308511 6416 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 09:43:16.308555 6416 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 09:43:16.308627 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0128 09:43:16.308667 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 09:43:16.308794 6416 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0
e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.968016 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.981593 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:17 crc kubenswrapper[4735]: I0128 09:43:17.996484 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:17Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.012796 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:18Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.026446 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:18Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.042342 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:18Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.042836 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.042882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.042896 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.042913 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.042927 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.056133 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc673
3f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:18Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.073809 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T09:43:18Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.090519 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:18Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.108872 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:18Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.127531 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb
9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:18Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.142996 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:18Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:18 crc 
kubenswrapper[4735]: I0128 09:43:18.145196 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.145257 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.145271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.145291 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.145306 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.247894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.247937 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.247949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.247965 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.247975 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.345410 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.345529 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 09:43:50.345507598 +0000 UTC m=+83.495171981 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.349673 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.349709 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.349719 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.349734 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.349765 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.453035 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.453072 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.453093 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.453115 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.453239 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.453262 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.453274 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.453357 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:50.45330794 +0000 UTC m=+83.602972313 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.453602 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.453651 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-28 09:43:50.453638428 +0000 UTC m=+83.603302801 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.453686 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.453793 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:50.453774101 +0000 UTC m=+83.603438474 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.454133 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.454168 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.454180 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.454239 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:50.454223113 +0000 UTC m=+83.603887486 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.454853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.454883 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.454898 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.454935 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.454947 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.502079 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.502117 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.502164 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.502215 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.502100 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.502393 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.502382 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:18 crc kubenswrapper[4735]: E0128 09:43:18.502452 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.557199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.557239 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.557248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.557263 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.557273 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.643600 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:52:44.680720393 +0000 UTC Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.660361 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.660433 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.660444 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.660471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.660486 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.764000 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.764085 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.764103 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.764129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.764147 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.866674 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.866789 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.866815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.866843 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.866863 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.970232 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.970285 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.970302 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.970326 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:18 crc kubenswrapper[4735]: I0128 09:43:18.970341 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:18Z","lastTransitionTime":"2026-01-28T09:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.073617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.073677 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.073694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.073718 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.073735 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:19Z","lastTransitionTime":"2026-01-28T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.176833 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.176876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.176890 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.176917 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.176937 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:19Z","lastTransitionTime":"2026-01-28T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.279253 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.279310 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.279325 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.279346 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.279359 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:19Z","lastTransitionTime":"2026-01-28T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.382168 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.382456 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.382527 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.382592 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.382653 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:19Z","lastTransitionTime":"2026-01-28T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.485770 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.485824 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.485836 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.485853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.485865 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:19Z","lastTransitionTime":"2026-01-28T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.590379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.590432 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.590442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.590461 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.590473 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:19Z","lastTransitionTime":"2026-01-28T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.644037 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:20:17.385193446 +0000 UTC Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.692844 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.692939 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.692959 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.692984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.693001 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:19Z","lastTransitionTime":"2026-01-28T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.795535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.795576 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.795584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.795601 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.795610 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:19Z","lastTransitionTime":"2026-01-28T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.898253 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.898322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.898345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.898370 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:19 crc kubenswrapper[4735]: I0128 09:43:19.898387 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:19Z","lastTransitionTime":"2026-01-28T09:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.001676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.001733 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.001768 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.001788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.001801 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.104993 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.105064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.105079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.105104 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.105120 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.208276 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.208354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.208376 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.208406 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.208428 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.310889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.310959 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.310976 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.311002 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.311019 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.414139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.414232 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.414262 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.414296 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.414320 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.502526 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.502536 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.502533 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.502669 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:20 crc kubenswrapper[4735]: E0128 09:43:20.502837 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:20 crc kubenswrapper[4735]: E0128 09:43:20.503017 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:20 crc kubenswrapper[4735]: E0128 09:43:20.503162 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:20 crc kubenswrapper[4735]: E0128 09:43:20.503387 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.518007 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.518073 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.518099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.518147 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.518188 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.621158 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.621240 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.621267 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.621300 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.621325 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.644913 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:41:11.852009476 +0000 UTC Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.724191 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.724295 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.724313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.724338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.724355 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.827446 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.827497 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.827508 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.827525 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.827539 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.931307 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.931377 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.931394 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.931422 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:20 crc kubenswrapper[4735]: I0128 09:43:20.931440 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:20Z","lastTransitionTime":"2026-01-28T09:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.034454 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.034520 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.034532 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.034555 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.034571 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.137532 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.137586 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.137606 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.137636 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.137652 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.240408 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.241098 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.241129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.241158 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.241184 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.343834 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.343876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.343885 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.343901 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.343919 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.447586 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.447621 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.447630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.447644 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.447654 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.550825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.550873 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.550887 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.550913 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.550931 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.645149 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:41:59.810283408 +0000 UTC Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.654529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.654611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.654624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.654645 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.654658 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.758440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.758863 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.759005 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.759110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.759324 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.862825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.862894 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.862907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.862936 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.862953 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.965942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.965970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.965979 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.965995 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:21 crc kubenswrapper[4735]: I0128 09:43:21.966004 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:21Z","lastTransitionTime":"2026-01-28T09:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.070281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.070330 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.070341 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.070361 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.070374 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:22Z","lastTransitionTime":"2026-01-28T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.174309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.174395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.174421 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.174454 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.174473 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:22Z","lastTransitionTime":"2026-01-28T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.278701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.279167 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.279344 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.279508 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.279642 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:22Z","lastTransitionTime":"2026-01-28T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.382642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.382683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.382694 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.382710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.382723 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:22Z","lastTransitionTime":"2026-01-28T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.485850 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.486217 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.486361 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.486588 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.486743 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:22Z","lastTransitionTime":"2026-01-28T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.501629 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.501704 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:22 crc kubenswrapper[4735]: E0128 09:43:22.501831 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.501889 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:22 crc kubenswrapper[4735]: E0128 09:43:22.502082 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:22 crc kubenswrapper[4735]: E0128 09:43:22.502238 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.502481 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:22 crc kubenswrapper[4735]: E0128 09:43:22.502733 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.591999 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.592062 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.592081 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.592106 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.592126 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:22Z","lastTransitionTime":"2026-01-28T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.646093 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:37:57.900908588 +0000 UTC Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.695529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.695591 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.695608 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.695633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.695650 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:22Z","lastTransitionTime":"2026-01-28T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.800698 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.800854 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.800908 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.800944 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.800974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.800998 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:22Z","lastTransitionTime":"2026-01-28T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:22 crc kubenswrapper[4735]: E0128 09:43:22.801054 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:22 crc kubenswrapper[4735]: E0128 09:43:22.801223 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs podName:06fddb5b-5b1e-40de-9b21-e28092521208 nodeName:}" failed. No retries permitted until 2026-01-28 09:43:38.801184633 +0000 UTC m=+71.950849046 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs") pod "network-metrics-daemon-zcskf" (UID: "06fddb5b-5b1e-40de-9b21-e28092521208") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.903601 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.903632 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.903640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.903654 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:22 crc kubenswrapper[4735]: I0128 09:43:22.903663 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:22Z","lastTransitionTime":"2026-01-28T09:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.006607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.006693 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.006719 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.006788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.006819 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.109994 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.110056 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.110070 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.110091 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.110144 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.214703 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.214773 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.214785 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.214806 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.214817 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.317704 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.317741 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.317766 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.317786 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.317803 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.419925 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.419962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.419971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.419984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.419995 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.522355 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.522403 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.522412 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.522423 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.522433 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.626547 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.626643 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.626665 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.627121 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.627385 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.647148 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:54:17.585543966 +0000 UTC Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.730956 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.731016 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.731033 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.731060 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.731091 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.833519 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.833578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.833587 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.833602 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.833611 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.936271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.936314 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.936324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.936338 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.936349 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.996518 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.996578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.996595 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.996619 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:23 crc kubenswrapper[4735]: I0128 09:43:23.996635 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:23Z","lastTransitionTime":"2026-01-28T09:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: E0128 09:43:24.007271 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:24Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.011057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.011101 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.011117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.011155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.011169 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.026906 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.026937 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.026949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.026965 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.026979 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: E0128 09:43:24.041026 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:24Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.044943 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.044982 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.044994 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.045013 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.045028 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: E0128 09:43:24.057470 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:24Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.060725 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.060786 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.060799 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.060815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.060828 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: E0128 09:43:24.072219 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:24Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:24 crc kubenswrapper[4735]: E0128 09:43:24.072333 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.073790 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.073822 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.073835 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.073853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.073864 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.177909 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.178004 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.178077 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.178106 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.178130 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.281579 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.281615 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.281625 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.281640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.281652 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.384805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.384846 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.384857 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.384871 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.384880 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.488216 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.488269 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.488283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.488303 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.488321 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.502583 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.502660 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.502613 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.502613 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:24 crc kubenswrapper[4735]: E0128 09:43:24.502803 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:24 crc kubenswrapper[4735]: E0128 09:43:24.502908 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:24 crc kubenswrapper[4735]: E0128 09:43:24.503036 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:24 crc kubenswrapper[4735]: E0128 09:43:24.503149 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.590761 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.590817 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.590833 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.590851 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.590864 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.648066 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:03:24.73820791 +0000 UTC Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.693807 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.693867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.693884 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.693908 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.693926 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.796150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.796182 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.796190 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.796204 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.796214 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.899398 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.899456 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.899474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.899497 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:24 crc kubenswrapper[4735]: I0128 09:43:24.899515 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:24Z","lastTransitionTime":"2026-01-28T09:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.001571 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.001609 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.001618 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.001633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.001644 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.104620 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.104675 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.104685 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.104703 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.104717 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.207249 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.207318 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.207337 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.207363 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.207382 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.310562 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.310609 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.310624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.310643 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.310659 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.413400 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.413440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.413450 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.413466 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.413478 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.516045 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.516096 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.516110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.516129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.516145 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.619003 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.619062 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.619072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.619092 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.619104 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.649136 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:08:49.914603169 +0000 UTC Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.721710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.721828 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.721853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.721884 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.721906 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.825378 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.825451 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.825474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.825504 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.825538 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.927975 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.928121 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.928135 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.928155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:25 crc kubenswrapper[4735]: I0128 09:43:25.928166 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:25Z","lastTransitionTime":"2026-01-28T09:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.031076 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.031155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.031180 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.031212 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.031234 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.134487 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.134569 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.134593 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.134624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.134648 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.237782 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.237850 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.237866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.237889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.237906 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.340515 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.340624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.340644 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.340670 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.340693 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.443061 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.443127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.443143 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.443160 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.443171 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.502146 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.502211 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.502289 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:26 crc kubenswrapper[4735]: E0128 09:43:26.502435 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.502483 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:26 crc kubenswrapper[4735]: E0128 09:43:26.502714 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:26 crc kubenswrapper[4735]: E0128 09:43:26.502882 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:26 crc kubenswrapper[4735]: E0128 09:43:26.503002 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.546420 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.546500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.546519 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.546544 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.546563 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.649430 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:55:16.716857347 +0000 UTC Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.650296 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.650357 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.650374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.650397 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.650411 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.754137 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.754199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.754210 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.754228 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.754239 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.858044 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.858091 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.858102 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.858116 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.858127 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.961229 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.961552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.961563 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.961584 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:26 crc kubenswrapper[4735]: I0128 09:43:26.961601 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:26Z","lastTransitionTime":"2026-01-28T09:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.064323 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.064359 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.064368 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.064383 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.064399 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.167111 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.167150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.167161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.167180 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.167197 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.272613 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.272699 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.272718 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.272737 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.272770 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.376204 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.376255 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.376269 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.376291 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.376304 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.478544 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.478592 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.478607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.478631 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.478648 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.520388 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d
426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.537728 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.553429 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.567035 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.580422 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.582873 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.582897 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.582908 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.582924 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.582935 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.596318 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc673
3f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.617819 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.633528 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.649548 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:38:41.778554504 +0000 UTC Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.652101 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcab
b9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6d
dbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.667009 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb
9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.679710 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc 
kubenswrapper[4735]: I0128 09:43:27.684961 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.685008 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.685027 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.685055 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.685080 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.697538 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.713939 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.731361 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.743459 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.774592 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288236 6416 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288249 6416 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288295 6416 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288347 6416 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288394 6416 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.308511 6416 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 09:43:16.308555 6416 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 09:43:16.308627 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0128 09:43:16.308667 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 09:43:16.308794 6416 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0
e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.791825 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.791962 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.791981 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.792035 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.792053 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.793148 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:27Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.894615 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.894949 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.895071 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.895221 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.895332 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.997912 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.997976 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.997990 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.998015 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:27 crc kubenswrapper[4735]: I0128 09:43:27.998057 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:27Z","lastTransitionTime":"2026-01-28T09:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.100294 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.100360 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.100378 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.100402 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.100419 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:28Z","lastTransitionTime":"2026-01-28T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.203648 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.203715 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.203730 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.203763 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.203776 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:28Z","lastTransitionTime":"2026-01-28T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.306519 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.306567 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.306579 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.306599 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.306611 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:28Z","lastTransitionTime":"2026-01-28T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.409067 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.409141 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.409164 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.409193 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.409217 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:28Z","lastTransitionTime":"2026-01-28T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.502218 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.502326 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.502327 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.502367 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:28 crc kubenswrapper[4735]: E0128 09:43:28.502826 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:28 crc kubenswrapper[4735]: E0128 09:43:28.502936 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:28 crc kubenswrapper[4735]: E0128 09:43:28.502687 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:28 crc kubenswrapper[4735]: E0128 09:43:28.503143 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.512309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.512366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.512384 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.512410 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.512427 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:28Z","lastTransitionTime":"2026-01-28T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.616382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.616441 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.616460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.616490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.616515 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:28Z","lastTransitionTime":"2026-01-28T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.650189 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:35:46.770597294 +0000 UTC Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.720993 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.721046 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.721058 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.721079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.721099 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:28Z","lastTransitionTime":"2026-01-28T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.825026 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.825081 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.825097 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.825119 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.825136 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:28Z","lastTransitionTime":"2026-01-28T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.928187 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.928253 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.928277 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.928305 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:28 crc kubenswrapper[4735]: I0128 09:43:28.928374 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:28Z","lastTransitionTime":"2026-01-28T09:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.032407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.032464 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.032477 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.032495 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.032510 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.135523 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.135578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.135589 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.135609 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.135625 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.239179 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.239234 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.239248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.239272 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.239293 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.341514 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.341569 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.341579 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.341602 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.341620 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.444213 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.444281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.444293 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.444314 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.444329 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.503300 4735 scope.go:117] "RemoveContainer" containerID="ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847" Jan 28 09:43:29 crc kubenswrapper[4735]: E0128 09:43:29.503504 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.547374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.547424 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.547435 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.547455 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.547467 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.650371 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:10:31.408253644 +0000 UTC Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.651382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.651437 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.651454 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.651479 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.651498 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.754809 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.754868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.754879 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.754901 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.754913 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.858829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.858885 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.858901 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.858924 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.858940 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.961734 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.961804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.961819 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.961839 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:29 crc kubenswrapper[4735]: I0128 09:43:29.961853 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:29Z","lastTransitionTime":"2026-01-28T09:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.065688 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.065781 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.065796 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.065816 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.065832 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.169441 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.169519 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.169539 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.169560 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.169576 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.272377 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.272414 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.272423 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.272440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.272452 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.375646 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.375701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.375713 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.375733 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.375777 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.478476 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.478517 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.478527 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.478546 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.478558 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.502520 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 09:43:30 crc kubenswrapper[4735]: E0128 09:43:30.502697 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.502819 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 09:43:30 crc kubenswrapper[4735]: E0128 09:43:30.502872 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.502920 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 09:43:30 crc kubenswrapper[4735]: E0128 09:43:30.502969 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.503022 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf"
Jan 28 09:43:30 crc kubenswrapper[4735]: E0128 09:43:30.503075 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.582038 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.582103 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.582120 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.582150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.582167 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.650711 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:30:12.261750999 +0000 UTC
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.685342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.685394 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.685414 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.685444 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.685463 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.788347 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.788397 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.788409 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.788469 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.788485 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.891957 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.892017 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.892028 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.892049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.892064 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.995442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.995504 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.995528 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.995548 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:30 crc kubenswrapper[4735]: I0128 09:43:30.995559 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:30Z","lastTransitionTime":"2026-01-28T09:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.099048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.099105 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.099118 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.099138 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.099152 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:31Z","lastTransitionTime":"2026-01-28T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.202425 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.202493 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.202506 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.202530 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.202544 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:31Z","lastTransitionTime":"2026-01-28T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.305811 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.305887 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.305912 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.305944 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.305968 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:31Z","lastTransitionTime":"2026-01-28T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.409551 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.409632 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.409656 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.409676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.409725 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:31Z","lastTransitionTime":"2026-01-28T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.512371 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.512418 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.512433 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.512454 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.512470 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:31Z","lastTransitionTime":"2026-01-28T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.615126 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.615178 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.615188 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.615206 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.615218 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:31Z","lastTransitionTime":"2026-01-28T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.651653 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:38:06.771258515 +0000 UTC
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.718573 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.718630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.718641 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.718658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.718670 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:31Z","lastTransitionTime":"2026-01-28T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.822961 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.823036 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.823056 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.823089 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.823109 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:31Z","lastTransitionTime":"2026-01-28T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.926518 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.926568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.926580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.926599 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:31 crc kubenswrapper[4735]: I0128 09:43:31.926612 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:31Z","lastTransitionTime":"2026-01-28T09:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.029975 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.030027 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.030038 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.030060 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.030075 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.137365 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.137432 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.137460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.137482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.137497 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.240292 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.240329 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.240341 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.240357 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.240371 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.343383 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.343450 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.343472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.343500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.343521 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.446520 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.446566 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.446578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.446595 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.446607 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.502859 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.502979 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.503142 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 09:43:32 crc kubenswrapper[4735]: E0128 09:43:32.503147 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.503378 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 09:43:32 crc kubenswrapper[4735]: E0128 09:43:32.503504 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 09:43:32 crc kubenswrapper[4735]: E0128 09:43:32.503603 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 09:43:32 crc kubenswrapper[4735]: E0128 09:43:32.503667 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.548570 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.548620 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.548634 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.548652 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.548666 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.651636 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.651672 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.651681 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.651698 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.651707 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.651791 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:37:34.995011709 +0000 UTC
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.756363 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.756424 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.756440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.756461 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.756477 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.860300 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.860357 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.860382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.860410 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.860431 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.962911 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.963001 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.963049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.963073 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:32 crc kubenswrapper[4735]: I0128 09:43:32.963170 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:32Z","lastTransitionTime":"2026-01-28T09:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.066058 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.066142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.066157 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.066206 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.066218 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.168270 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.168324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.168340 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.168364 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.168380 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.271857 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.271908 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.271918 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.271938 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.271950 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.374233 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.374314 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.374329 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.374354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.374371 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.476866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.477517 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.477605 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.477703 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.477817 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.515373 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.581187 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.581251 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.581263 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.581284 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.581296 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.651946 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:34:18.966995225 +0000 UTC Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.684286 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.684346 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.684359 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.684380 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.684395 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.787141 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.787197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.787209 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.787228 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.787243 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.889718 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.889804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.889821 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.889844 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.889861 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.992985 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.993065 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.993082 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.993105 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:33 crc kubenswrapper[4735]: I0128 09:43:33.993123 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:33Z","lastTransitionTime":"2026-01-28T09:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.096829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.096891 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.096901 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.096920 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.096942 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.200096 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.200177 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.200220 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.200239 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.200254 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.302970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.303018 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.303030 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.303048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.303062 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.386806 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.386882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.386900 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.386919 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.386931 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.400635 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:34Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.405582 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.405645 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.405660 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.405679 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.405691 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.420063 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:34Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.424379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.424425 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.424439 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.424500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.424526 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.435855 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:34Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.439767 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.439815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.439826 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.439848 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.439862 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.453530 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:34Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.456955 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.456990 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.457001 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.457032 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.457042 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.469085 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:34Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.469262 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.471403 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.471468 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.471478 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.471493 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.471503 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.501802 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.501872 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.501961 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.501993 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.502030 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.502010 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.502074 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:34 crc kubenswrapper[4735]: E0128 09:43:34.502156 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.574181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.574231 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.574240 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.574259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.574274 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.653104 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:28:04.874649162 +0000 UTC Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.677095 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.677142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.677154 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.677176 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.677190 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.780400 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.780460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.780475 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.780495 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.780511 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.884353 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.884408 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.884423 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.884474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.884487 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.987982 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.988028 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.988044 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.988064 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:34 crc kubenswrapper[4735]: I0128 09:43:34.988076 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:34Z","lastTransitionTime":"2026-01-28T09:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.091427 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.091473 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.091482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.091499 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.091510 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:35Z","lastTransitionTime":"2026-01-28T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.194424 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.194472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.194482 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.194500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.194510 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:35Z","lastTransitionTime":"2026-01-28T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.297511 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.297578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.297595 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.297615 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.297631 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:35Z","lastTransitionTime":"2026-01-28T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.400015 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.400085 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.400102 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.400128 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.400145 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:35Z","lastTransitionTime":"2026-01-28T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.502354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.502399 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.502411 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.502427 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.502439 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:35Z","lastTransitionTime":"2026-01-28T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.605407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.605476 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.605494 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.605518 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.605538 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:35Z","lastTransitionTime":"2026-01-28T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.654348 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:06:26.255818486 +0000 UTC Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.709268 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.709313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.709322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.709337 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.709347 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:35Z","lastTransitionTime":"2026-01-28T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.813907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.813983 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.813996 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.814018 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.814036 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:35Z","lastTransitionTime":"2026-01-28T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.917452 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.917512 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.917523 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.917544 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:35 crc kubenswrapper[4735]: I0128 09:43:35.917563 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:35Z","lastTransitionTime":"2026-01-28T09:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.020794 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.020855 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.020868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.020889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.020902 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.125182 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.125236 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.125249 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.125268 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.125279 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.228610 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.228658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.228670 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.228687 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.228698 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.331151 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.331199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.331209 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.331223 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.331233 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.434788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.434836 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.434845 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.434862 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.434871 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.502349 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.502435 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.502457 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.502459 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:36 crc kubenswrapper[4735]: E0128 09:43:36.503183 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:36 crc kubenswrapper[4735]: E0128 09:43:36.503347 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:36 crc kubenswrapper[4735]: E0128 09:43:36.503528 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:36 crc kubenswrapper[4735]: E0128 09:43:36.510402 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.537971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.538027 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.538040 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.538059 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.538071 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.640528 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.640578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.640609 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.640625 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.640637 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.654993 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:58:14.386452632 +0000 UTC Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.743063 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.743118 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.743126 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.743139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.743149 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.845257 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.845331 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.845345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.845368 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.845382 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.948676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.948773 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.948792 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.948818 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:36 crc kubenswrapper[4735]: I0128 09:43:36.948836 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:36Z","lastTransitionTime":"2026-01-28T09:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.051999 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.052045 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.052056 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.052074 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.052087 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.154390 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.154440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.154453 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.154471 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.154485 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.258110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.258178 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.258197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.258224 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.258241 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.360683 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.360727 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.360736 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.360765 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.360775 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.463260 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.463311 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.463328 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.463411 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.463504 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.517562 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28d
bbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.531718 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.545117 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.561093 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:
46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6
075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.565977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.566013 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.566027 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.566044 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.566056 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.575514 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.594791 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.607717 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb
9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.621297 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc 
kubenswrapper[4735]: I0128 09:43:37.639321 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.653012 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.655534 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:21:32.818573166 +0000 UTC Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.667878 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.668467 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.668506 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.668522 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.668540 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.668553 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.682001 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.697090 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.718147 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288236 6416 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288249 6416 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288295 6416 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288347 6416 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288394 6416 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.308511 6416 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 09:43:16.308555 6416 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 09:43:16.308627 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0128 09:43:16.308667 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 09:43:16.308794 6416 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0
e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.728337 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.742891 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42c117b2-57a0-41f2-be48-1f97ec1bbb72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654b295d2e5a1a1b46320511f00de492771e80391afdfffab5f0b50fadbc15fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc 
kubenswrapper[4735]: I0128 09:43:37.759276 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.771132 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.771172 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.771181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.771196 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.771207 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.776376 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d
426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:37Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.878301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.878402 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.878428 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.878461 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.878488 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.981100 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.981133 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.981142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.981155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:37 crc kubenswrapper[4735]: I0128 09:43:37.981165 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:37Z","lastTransitionTime":"2026-01-28T09:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.083648 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.083939 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.084003 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.084071 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.084135 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:38Z","lastTransitionTime":"2026-01-28T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.186585 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.186613 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.186622 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.186634 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.186643 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:38Z","lastTransitionTime":"2026-01-28T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.289720 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.290357 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.290375 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.290391 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.290404 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:38Z","lastTransitionTime":"2026-01-28T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.392693 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.392785 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.392804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.392830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.392848 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:38Z","lastTransitionTime":"2026-01-28T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.494875 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.494912 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.494923 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.494938 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.494949 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:38Z","lastTransitionTime":"2026-01-28T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.502206 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.502297 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:38 crc kubenswrapper[4735]: E0128 09:43:38.502332 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.502211 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.502224 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:38 crc kubenswrapper[4735]: E0128 09:43:38.502486 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:38 crc kubenswrapper[4735]: E0128 09:43:38.502572 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:38 crc kubenswrapper[4735]: E0128 09:43:38.502618 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.597191 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.597288 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.597304 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.597322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.597334 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:38Z","lastTransitionTime":"2026-01-28T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.656453 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:23:55.552197761 +0000 UTC Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.699439 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.699488 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.699500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.699518 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.699532 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:38Z","lastTransitionTime":"2026-01-28T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.801989 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.802021 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.802029 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.802044 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.802056 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:38Z","lastTransitionTime":"2026-01-28T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.881026 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:38 crc kubenswrapper[4735]: E0128 09:43:38.881175 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:38 crc kubenswrapper[4735]: E0128 09:43:38.881219 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs podName:06fddb5b-5b1e-40de-9b21-e28092521208 nodeName:}" failed. No retries permitted until 2026-01-28 09:44:10.881205604 +0000 UTC m=+104.030869977 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs") pod "network-metrics-daemon-zcskf" (UID: "06fddb5b-5b1e-40de-9b21-e28092521208") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.904210 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.904263 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.904276 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.904294 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:38 crc kubenswrapper[4735]: I0128 09:43:38.904307 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:38Z","lastTransitionTime":"2026-01-28T09:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.007317 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.007378 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.007395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.007418 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.007433 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.110843 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.110905 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.110916 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.110937 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.110950 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.212982 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.213014 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.213021 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.213033 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.213042 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.316069 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.316148 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.316172 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.316198 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.316214 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.418553 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.418623 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.418641 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.418667 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.418684 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.520741 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.520857 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.520884 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.520914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.520935 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.627555 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.627623 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.627638 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.627668 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.627683 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.656796 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:14:05.474569097 +0000 UTC Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.730947 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.731013 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.731084 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.731113 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.731166 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.834435 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.834496 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.834519 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.834574 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.834596 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.937142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.937184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.937192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.937208 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:39 crc kubenswrapper[4735]: I0128 09:43:39.937220 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:39Z","lastTransitionTime":"2026-01-28T09:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.040176 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.040259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.040276 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.040303 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.040320 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.143376 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.143431 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.143448 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.143474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.143487 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.254914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.254979 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.254990 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.255010 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.255022 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.357076 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.357139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.357153 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.357167 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.357178 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.460546 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.460598 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.460608 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.460625 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.460635 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.502062 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.502163 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:40 crc kubenswrapper[4735]: E0128 09:43:40.502270 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.502099 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:40 crc kubenswrapper[4735]: E0128 09:43:40.502360 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.502069 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:40 crc kubenswrapper[4735]: E0128 09:43:40.502440 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:40 crc kubenswrapper[4735]: E0128 09:43:40.502576 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.563514 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.563564 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.563572 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.563588 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.563598 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.657591 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:25:04.022847367 +0000 UTC Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.666240 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.666295 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.666313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.666339 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.666358 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.768851 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.768919 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.768932 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.768955 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.768970 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.872387 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.872458 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.872474 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.872495 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.872510 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.975330 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.975379 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.975391 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.975411 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:40 crc kubenswrapper[4735]: I0128 09:43:40.975426 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:40Z","lastTransitionTime":"2026-01-28T09:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.078280 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.078374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.078400 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.078432 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.078451 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:41Z","lastTransitionTime":"2026-01-28T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.180229 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.180279 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.180291 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.180306 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.180319 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:41Z","lastTransitionTime":"2026-01-28T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.282848 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.282896 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.282911 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.282931 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.282943 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:41Z","lastTransitionTime":"2026-01-28T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.385709 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.385787 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.385803 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.385820 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.385833 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:41Z","lastTransitionTime":"2026-01-28T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.488397 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.488453 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.488465 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.488483 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.488495 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:41Z","lastTransitionTime":"2026-01-28T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.537823 4735 scope.go:117] "RemoveContainer" containerID="ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.592472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.592830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.592842 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.592858 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.592868 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:41Z","lastTransitionTime":"2026-01-28T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.658477 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:44:31.033607236 +0000 UTC Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.695606 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.695649 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.695685 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.695709 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.695722 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:41Z","lastTransitionTime":"2026-01-28T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.799912 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.800001 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.800015 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.800042 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.800059 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:41Z","lastTransitionTime":"2026-01-28T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.902562 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.902591 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.902600 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.902613 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.902622 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:41Z","lastTransitionTime":"2026-01-28T09:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.964804 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/2.log" Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.980555 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} Jan 28 09:43:41 crc kubenswrapper[4735]: I0128 09:43:41.981836 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.003159 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc
6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.005400 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.005428 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.005439 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.005487 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.005499 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.020254 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.034476 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.056165 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.075374 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb
9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.095536 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc 
kubenswrapper[4735]: I0128 09:43:42.108568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.108607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.108616 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.108637 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.108649 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.120768 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.141260 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.163988 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.177908 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.190799 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T
09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.210570 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.210594 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.210602 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.210614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.210624 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.215819 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288236 6416 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288249 6416 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288295 6416 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288347 6416 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288394 6416 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.308511 6416 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 09:43:16.308555 6416 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 09:43:16.308627 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0128 09:43:16.308667 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 09:43:16.308794 6416 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.235213 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.255148 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42c117b2-57a0-41f2-be48-1f97ec1bbb72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654b295d2e5a1a1b46320511f00de492771e80391afdfffab5f0b50fadbc15fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc 
kubenswrapper[4735]: I0128 09:43:42.269741 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.285494 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.299380 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.313067 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.313099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.313110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.313127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.313140 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.314373 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:42Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.416573 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.416620 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.416632 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.416651 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.416663 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.502150 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.502189 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.502202 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.502150 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:42 crc kubenswrapper[4735]: E0128 09:43:42.502359 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:42 crc kubenswrapper[4735]: E0128 09:43:42.502577 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:42 crc kubenswrapper[4735]: E0128 09:43:42.502656 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:42 crc kubenswrapper[4735]: E0128 09:43:42.502724 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.519394 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.519433 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.519442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.519461 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.519473 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.622258 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.622323 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.622339 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.622362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.622377 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.658984 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:11:25.602520189 +0000 UTC Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.725238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.725632 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.725720 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.725857 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.725951 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.828845 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.828915 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.828934 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.828965 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.828989 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.932899 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.932974 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.932995 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.933025 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.933049 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:42Z","lastTransitionTime":"2026-01-28T09:43:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.986302 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/3.log" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.987019 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/2.log" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.992160 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" exitCode=1 Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.992270 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.992320 4735 scope.go:117] "RemoveContainer" containerID="ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.993564 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:43:42 crc kubenswrapper[4735]: E0128 09:43:42.993875 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.996655 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/0.log" Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.996774 4735 generic.go:334] "Generic (PLEG): container finished" podID="c50beb21-41ab-474f-90d2-ba27152379cf" containerID="3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021" exitCode=1 Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.996841 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jxbwx" event={"ID":"c50beb21-41ab-474f-90d2-ba27152379cf","Type":"ContainerDied","Data":"3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021"} Jan 28 09:43:42 crc kubenswrapper[4735]: I0128 09:43:42.997437 4735 scope.go:117] "RemoveContainer" containerID="3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.013189 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e6
8b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.030699 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.044418 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.044469 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.044481 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.044506 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.044519 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:43Z","lastTransitionTime":"2026-01-28T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.051230 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.070022 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.088879 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.105765 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.124390 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.139167 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.149250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.149306 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.149317 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.149340 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.149353 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:43Z","lastTransitionTime":"2026-01-28T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.153985 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc 
kubenswrapper[4735]: I0128 09:43:43.170579 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42c117b2-57a0-41f2-be48-1f97ec1bbb72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654b295d2e5a1a1b46320511f00de492771e80391afdfffab5f0b50fadbc15fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.183334 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.198651 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.214681 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.226556 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.252555 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.252602 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.252612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.252629 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.252652 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:43Z","lastTransitionTime":"2026-01-28T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.253085 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288236 6416 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288249 6416 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288295 6416 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288347 6416 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288394 6416 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.308511 6416 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 09:43:16.308555 6416 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 09:43:16.308627 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0128 09:43:16.308667 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 09:43:16.308794 6416 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\" Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 09:43:42.437710 6749 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 09:43:42.437530 6749 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"
/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\
\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc 
kubenswrapper[4735]: I0128 09:43:43.268401 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j
sjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.285945 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da
410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.306537 4735 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.323800 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.339204 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10
fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.353658 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.355454 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.355503 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.355517 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.355535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.355548 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:43Z","lastTransitionTime":"2026-01-28T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.369619 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\"2026-01-28T09:42:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958\\\\n2026-01-28T09:42:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958 to /host/opt/cni/bin/\\\\n2026-01-28T09:42:57Z [verbose] multus-daemon started\\\\n2026-01-28T09:42:57Z [verbose] Readiness Indicator file check\\\\n2026-01-28T09:43:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.386397 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.401701 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.423875 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8be
ceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.443716 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.457831 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.457869 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.457878 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.457893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.457907 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:43Z","lastTransitionTime":"2026-01-28T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.460146 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc 
kubenswrapper[4735]: I0128 09:43:43.477662 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.491823 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.508194 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.520348 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.530832 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.549950 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ada26708cf7bef6eff4077e634d69762b893a59cfa7c67a3cb8ca18480b37847\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288236 6416 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288249 6416 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288295 6416 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 09:43:16.288347 6416 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.288394 6416 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 09:43:16.308511 6416 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 09:43:16.308555 6416 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 09:43:16.308627 6416 ovnkube.go:599] Stopped ovnkube\\\\nI0128 09:43:16.308667 6416 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 09:43:16.308794 6416 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\" Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 09:43:42.437710 6749 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 09:43:42.437530 6749 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"
/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\
\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc 
kubenswrapper[4735]: I0128 09:43:43.561160 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j
sjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.561287 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.561313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.561322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.561339 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.561350 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:43Z","lastTransitionTime":"2026-01-28T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.571343 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42c117b2-57a0-41f2-be48-1f97ec1bbb72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654b295d2e5a1a1b46320511f00de492771e80391afdfffab5f0b50fadbc15fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.586039 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:43Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.659620 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:43:57.411516199 +0000 UTC Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.664381 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.664432 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.664446 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.664469 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.664485 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:43Z","lastTransitionTime":"2026-01-28T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.793660 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.793719 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.793729 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.793767 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.793781 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:43Z","lastTransitionTime":"2026-01-28T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.897456 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.897513 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.897524 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.897546 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:43 crc kubenswrapper[4735]: I0128 09:43:43.897596 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:43Z","lastTransitionTime":"2026-01-28T09:43:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.000127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.000182 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.000194 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.000217 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.000230 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.003232 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/3.log" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.014389 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.015069 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.018873 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/0.log" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.018973 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jxbwx" event={"ID":"c50beb21-41ab-474f-90d2-ba27152379cf","Type":"ContainerStarted","Data":"c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.025182 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.056914 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\" Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 09:43:42.437710 6749 
model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 09:43:42.437530 6749 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0
e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.070619 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.085650 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42c117b2-57a0-41f2-be48-1f97ec1bbb72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654b295d2e5a1a1b46320511f00de492771e80391afdfffab5f0b50fadbc15fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc 
kubenswrapper[4735]: I0128 09:43:44.104283 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.104358 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.104377 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.104405 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.104426 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.108728 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.128765 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.144522 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.159884 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.177166 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.194829 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.207315 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.207389 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.207409 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.207440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.207459 4735 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.210587 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.226852 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.246103 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\"2026-01-28T09:42:56+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958\\\\n2026-01-28T09:42:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958 to /host/opt/cni/bin/\\\\n2026-01-28T09:42:57Z [verbose] multus-daemon started\\\\n2026-01-28T09:42:57Z [verbose] Readiness Indicator file check\\\\n2026-01-28T09:43:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.262819 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.279658 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.294599 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.312536 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.312582 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.312595 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.312615 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.312630 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.313235 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.329320 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.342454 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.356395 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc 
kubenswrapper[4735]: I0128 09:43:44.373420 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.385375 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.401539 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.415659 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.415719 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.415731 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.415772 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.415787 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.418068 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.431018 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.449663 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\" Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 09:43:42.437710 6749 
model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 09:43:42.437530 6749 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0
e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.461093 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.474351 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42c117b2-57a0-41f2-be48-1f97ec1bbb72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654b295d2e5a1a1b46320511f00de492771e80391afdfffab5f0b50fadbc15fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc 
kubenswrapper[4735]: I0128 09:43:44.492514 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.502529 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.502528 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.502702 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.502561 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.502527 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.502894 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.502956 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.503046 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.508420 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.519259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.519293 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.519301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.519317 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.519327 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.519454 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.531976 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.547044 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\"2026-01-28T09:42:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958\\\\n2026-01-28T09:42:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958 to /host/opt/cni/bin/\\\\n2026-01-28T09:42:57Z [verbose] multus-daemon started\\\\n2026-01-28T09:42:57Z [verbose] Readiness Indicator file check\\\\n2026-01-28T09:43:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.562423 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e6
8b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.576576 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.592511 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.622026 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.622108 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.622122 4735 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.622146 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.622166 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.661904 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:21:00.420145329 +0000 UTC Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.722054 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.722159 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.722175 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.722206 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.722227 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.740822 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.745990 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.746039 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.746052 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.746072 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.746086 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.758204 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.772016 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.772099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.772127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.772163 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.772189 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.790493 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.796710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.796782 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.796792 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.796811 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.796822 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.812299 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.817244 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.817313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.817324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.817362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.817374 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.831885 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:44Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:44 crc kubenswrapper[4735]: E0128 09:43:44.832000 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.833807 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.833929 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.833942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.833982 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.833999 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.937676 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.937728 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.937739 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.937780 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:44 crc kubenswrapper[4735]: I0128 09:43:44.937790 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:44Z","lastTransitionTime":"2026-01-28T09:43:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.040946 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.040997 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.041024 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.041045 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.041059 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.144791 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.144849 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.144862 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.144884 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.144904 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.248056 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.248110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.248123 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.248145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.248158 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.351823 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.351882 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.351893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.351913 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.351929 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.455516 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.455578 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.455594 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.455619 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.455634 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.558874 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.558943 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.558955 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.558977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.558993 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.662551 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:26:50.832478928 +0000 UTC Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.662711 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.662835 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.662870 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.662904 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.662936 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.766979 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.767056 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.767076 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.767107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.767130 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.869910 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.869969 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.869984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.870004 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.870018 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.972526 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.972588 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.972602 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.972625 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:45 crc kubenswrapper[4735]: I0128 09:43:45.972637 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:45Z","lastTransitionTime":"2026-01-28T09:43:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.076442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.076556 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.076587 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.076627 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.076654 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:46Z","lastTransitionTime":"2026-01-28T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.180374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.180457 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.180478 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.180510 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.180530 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:46Z","lastTransitionTime":"2026-01-28T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.284155 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.284220 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.284238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.284265 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.284282 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:46Z","lastTransitionTime":"2026-01-28T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.387177 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.387238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.387251 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.387271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.387288 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:46Z","lastTransitionTime":"2026-01-28T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.490485 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.490541 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.490558 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.490580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.490598 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:46Z","lastTransitionTime":"2026-01-28T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.501905 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:46 crc kubenswrapper[4735]: E0128 09:43:46.502064 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.502126 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.502256 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.502153 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:46 crc kubenswrapper[4735]: E0128 09:43:46.502382 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:46 crc kubenswrapper[4735]: E0128 09:43:46.502454 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:46 crc kubenswrapper[4735]: E0128 09:43:46.502509 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.594372 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.594420 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.594437 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.594458 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.594476 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:46Z","lastTransitionTime":"2026-01-28T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.662677 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:02:42.29819061 +0000 UTC Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.697429 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.697486 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.697503 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.697530 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.697553 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:46Z","lastTransitionTime":"2026-01-28T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.802003 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.802080 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.802099 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.802127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.802146 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:46Z","lastTransitionTime":"2026-01-28T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.905626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.905704 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.905794 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.905820 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:46 crc kubenswrapper[4735]: I0128 09:43:46.905836 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:46Z","lastTransitionTime":"2026-01-28T09:43:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.009281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.009336 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.009353 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.009377 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.009395 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.112444 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.112526 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.112552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.112586 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.112612 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.216270 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.216862 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.216889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.216921 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.216945 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.320362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.320432 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.320452 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.320476 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.320492 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.423696 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.423788 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.423805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.423830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.423847 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.519412 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.519816 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.526312 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.526348 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.526357 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 
09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.526373 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.526385 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.533286 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.556727 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8be
ceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.571338 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.584091 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc 
kubenswrapper[4735]: I0128 09:43:47.595738 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42c117b2-57a0-41f2-be48-1f97ec1bbb72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654b295d2e5a1a1b46320511f00de492771e80391afdfffab5f0b50fadbc15fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.610595 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.628238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.628287 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.628301 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.628320 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.628356 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.630791 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.647071 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.661067 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.663182 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:35:31.777336794 +0000 UTC Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.681691 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\" Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 09:43:42.437710 6749 
model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 09:43:42.437530 6749 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0
e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.695651 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.714329 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.728697 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.730792 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.730815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.730823 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.730837 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.730846 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.742067 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.768386 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.783899 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.796938 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\"2026-01-28T09:42:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958\\\\n2026-01-28T09:42:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958 to /host/opt/cni/bin/\\\\n2026-01-28T09:42:57Z [verbose] multus-daemon started\\\\n2026-01-28T09:42:57Z [verbose] 
Readiness Indicator file check\\\\n2026-01-28T09:43:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:47Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.833597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.833657 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.833670 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.833697 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.833716 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.936988 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.937337 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.937477 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.937619 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:47 crc kubenswrapper[4735]: I0128 09:43:47.937737 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:47Z","lastTransitionTime":"2026-01-28T09:43:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.040460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.040507 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.040517 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.040534 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.040544 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.143853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.143921 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.143944 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.143972 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.143994 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.247412 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.247473 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.247485 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.247506 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.247519 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.351781 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.351876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.351895 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.351927 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.351954 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.455106 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.455178 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.455196 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.455219 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.455234 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.501801 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.501855 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.502008 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.502095 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:48 crc kubenswrapper[4735]: E0128 09:43:48.502102 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:48 crc kubenswrapper[4735]: E0128 09:43:48.502294 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:48 crc kubenswrapper[4735]: E0128 09:43:48.502359 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:48 crc kubenswrapper[4735]: E0128 09:43:48.502525 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.558311 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.558362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.558373 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.558389 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.558400 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.660984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.661042 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.661051 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.661069 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.661081 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.664165 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:06:44.561564735 +0000 UTC Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.764888 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.764946 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.764958 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.764977 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.764992 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.867858 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.867926 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.867942 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.867972 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.867987 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.971195 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.971238 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.971249 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.971264 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:48 crc kubenswrapper[4735]: I0128 09:43:48.971278 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:48Z","lastTransitionTime":"2026-01-28T09:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.073611 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.073658 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.073666 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.073681 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.073692 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:49Z","lastTransitionTime":"2026-01-28T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.176202 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.176233 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.176242 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.176257 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.176268 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:49Z","lastTransitionTime":"2026-01-28T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.278453 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.278512 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.278526 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.278542 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.278565 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:49Z","lastTransitionTime":"2026-01-28T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.381256 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.381309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.381325 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.381342 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.381353 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:49Z","lastTransitionTime":"2026-01-28T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.484544 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.484598 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.484609 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.484629 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.484641 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:49Z","lastTransitionTime":"2026-01-28T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.587188 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.587237 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.587245 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.587265 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.587274 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:49Z","lastTransitionTime":"2026-01-28T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.665300 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:45:13.085736311 +0000 UTC Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.689279 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.689336 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.689348 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.689366 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.689378 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:49Z","lastTransitionTime":"2026-01-28T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.792580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.792744 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.792810 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.792844 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.792866 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:49Z","lastTransitionTime":"2026-01-28T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.901250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.902354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.902539 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.902706 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:49 crc kubenswrapper[4735]: I0128 09:43:49.902909 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:49Z","lastTransitionTime":"2026-01-28T09:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.006109 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.006428 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.006529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.006631 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.006717 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.110328 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.110398 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.110414 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.110439 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.110459 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.213281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.213364 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.213384 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.213412 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.213431 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.316804 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.316867 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.316881 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.316939 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.316951 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.411932 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.412196 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 09:44:54.412146028 +0000 UTC m=+147.561810401 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.419500 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.419576 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.419590 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.419614 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.419627 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.502116 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.502218 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.502112 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.502290 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.502218 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.502467 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.502629 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.502820 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.513228 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.513282 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.513316 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 
09:43:50.513340 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513501 4735 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513545 4735 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513573 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513600 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513555 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513647 4735 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513652 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.513616753 +0000 UTC m=+147.663281126 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513668 4735 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513679 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.513670695 +0000 UTC m=+147.663335068 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513726 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-28 09:44:54.513701626 +0000 UTC m=+147.663365999 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513621 4735 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:50 crc kubenswrapper[4735]: E0128 09:43:50.513798 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.513791078 +0000 UTC m=+147.663455451 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.522547 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.522599 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.522608 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.522626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.522642 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.625380 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.625448 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.625467 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.625491 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.625509 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.665829 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 06:30:35.86311028 +0000 UTC Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.728495 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.728537 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.728554 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.728576 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.728593 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.831127 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.831201 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.831236 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.831268 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.831289 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.934662 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.934717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.934737 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.934800 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:50 crc kubenswrapper[4735]: I0128 09:43:50.934828 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:50Z","lastTransitionTime":"2026-01-28T09:43:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.038145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.038210 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.038225 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.038250 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.038268 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.142129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.142197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.142215 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.142239 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.142256 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.245920 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.246103 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.246118 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.246136 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.246147 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.349700 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.349809 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.349830 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.349859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.349877 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.453905 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.454024 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.454042 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.454066 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.454083 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.556574 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.556619 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.556629 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.556648 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.556660 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.660876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.660928 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.660940 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.660959 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.660971 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.666418 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:05:41.835299289 +0000 UTC Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.763823 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.763875 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.763883 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.763902 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.763917 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.866114 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.866169 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.866181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.866197 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.866262 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.969262 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.969335 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.969352 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.969377 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:51 crc kubenswrapper[4735]: I0128 09:43:51.969406 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:51Z","lastTransitionTime":"2026-01-28T09:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.073112 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.073204 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.073228 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.073263 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.073284 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:52Z","lastTransitionTime":"2026-01-28T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.176549 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.176595 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.176607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.176629 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.176643 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:52Z","lastTransitionTime":"2026-01-28T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.279431 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.279509 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.279527 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.279555 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.279575 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:52Z","lastTransitionTime":"2026-01-28T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.383640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.383698 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.383717 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.383740 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.383802 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:52Z","lastTransitionTime":"2026-01-28T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.488199 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.488291 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.488316 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.488348 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.488373 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:52Z","lastTransitionTime":"2026-01-28T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.501647 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.501683 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.501725 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:52 crc kubenswrapper[4735]: E0128 09:43:52.501817 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.501881 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:52 crc kubenswrapper[4735]: E0128 09:43:52.501963 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:52 crc kubenswrapper[4735]: E0128 09:43:52.502095 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:52 crc kubenswrapper[4735]: E0128 09:43:52.502221 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.592067 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.592118 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.592135 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.592157 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.592173 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:52Z","lastTransitionTime":"2026-01-28T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.667494 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:19:04.851705223 +0000 UTC Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.696045 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.696140 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.696156 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.696184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.696198 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:52Z","lastTransitionTime":"2026-01-28T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.799002 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.799080 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.799098 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.799123 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.799141 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:52Z","lastTransitionTime":"2026-01-28T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.903337 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.903407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.903419 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.903438 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:52 crc kubenswrapper[4735]: I0128 09:43:52.903451 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:52Z","lastTransitionTime":"2026-01-28T09:43:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.007517 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.007596 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.007615 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.007641 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.007658 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.111427 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.111478 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.111492 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.111510 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.111520 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.214659 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.214711 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.214722 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.214739 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.214779 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.317677 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.317769 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.317780 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.317800 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.317812 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.420844 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.420923 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.420944 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.420970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.420991 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.523315 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.523382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.523395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.523417 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.523435 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.625976 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.626035 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.626050 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.626071 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.626089 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.667894 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:15:55.170643532 +0000 UTC Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.728893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.728939 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.728952 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.728970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.728983 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.831496 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.831543 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.831554 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.831571 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.831583 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.934856 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.934896 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.934907 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.934923 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:53 crc kubenswrapper[4735]: I0128 09:43:53.934935 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:53Z","lastTransitionTime":"2026-01-28T09:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.038388 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.038450 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.038466 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.038492 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.038515 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.142800 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.142874 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.142893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.142917 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.142935 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.245468 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.245521 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.245532 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.245552 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.245567 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.348256 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.348340 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.348354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.348371 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.348386 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.451002 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.451069 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.451091 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.451125 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.451150 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.502977 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.503093 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.503039 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:54 crc kubenswrapper[4735]: E0128 09:43:54.503224 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.503251 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:54 crc kubenswrapper[4735]: E0128 09:43:54.503335 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:54 crc kubenswrapper[4735]: E0128 09:43:54.503429 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:54 crc kubenswrapper[4735]: E0128 09:43:54.503635 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.553621 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.553680 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.553697 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.553720 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.553738 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.656795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.656866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.656885 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.656911 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.656937 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.668375 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:51:15.481284578 +0000 UTC Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.759171 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.759248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.759267 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.759294 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.759311 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.861237 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.861293 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.861307 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.861321 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.861332 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.964028 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.964076 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.964087 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.964104 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:54 crc kubenswrapper[4735]: I0128 09:43:54.964117 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:54Z","lastTransitionTime":"2026-01-28T09:43:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.022835 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.022868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.022876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.022889 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.022899 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: E0128 09:43:55.036935 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.042803 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.042888 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.042914 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.042970 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.042989 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: E0128 09:43:55.056197 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:55Z is after 2025-08-24T17:21:41Z"
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.061795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.061824 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.061833 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.061868 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.061878 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.081492 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.081570 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.081587 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.081607 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.081632 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: E0128 09:43:55.100870 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.105183 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.105236 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.105248 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.105264 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.105275 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: E0128 09:43:55.123104 4735 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf1fdcc8-a86e-465f-b307-a31dff32dcb9\\\",\\\"systemUUID\\\":\\\"39f0a762-e2b9-4dea-b3e1-81189a88d57f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:55Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:55 crc kubenswrapper[4735]: E0128 09:43:55.123228 4735 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.125071 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.125117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.125129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.125146 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.125158 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.227426 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.227470 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.227483 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.227501 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.227512 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.330783 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.330815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.330822 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.330835 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.330844 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.433805 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.433853 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.433862 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.433876 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.433891 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.537401 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.537650 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.537671 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.537693 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.537710 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.641246 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.641309 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.641322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.641337 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.641347 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.668880 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:26:47.760382523 +0000 UTC Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.744624 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.744685 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.744701 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.744724 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.744774 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.847322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.847371 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.847382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.847399 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.847412 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.949565 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.949625 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.949638 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.949654 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:55 crc kubenswrapper[4735]: I0128 09:43:55.949670 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:55Z","lastTransitionTime":"2026-01-28T09:43:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.052110 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.052150 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.052174 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.052189 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.052199 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.154367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.154438 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.154449 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.154489 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.154509 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.258047 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.258120 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.258140 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.258168 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.258190 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.362048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.362098 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.362111 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.362130 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.362140 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.465184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.465233 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.465242 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.465261 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.465272 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.502597 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.502629 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.502616 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.502595 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:56 crc kubenswrapper[4735]: E0128 09:43:56.502730 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:56 crc kubenswrapper[4735]: E0128 09:43:56.502792 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:56 crc kubenswrapper[4735]: E0128 09:43:56.502840 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:56 crc kubenswrapper[4735]: E0128 09:43:56.503056 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.568685 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.568764 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.568778 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.568798 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.568817 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.669270 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:10:16.642762934 +0000 UTC Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.672682 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.672789 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.672814 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.672841 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.672865 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.776859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.776933 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.776957 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.776992 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.777015 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.880348 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.880421 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.880446 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.880481 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.880508 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.983698 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.983771 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.983797 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.983819 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:56 crc kubenswrapper[4735]: I0128 09:43:56.983833 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:56Z","lastTransitionTime":"2026-01-28T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.090395 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.090973 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.091024 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.091049 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.091062 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:57Z","lastTransitionTime":"2026-01-28T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.194558 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.194612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.194623 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.194642 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.194657 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:57Z","lastTransitionTime":"2026-01-28T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.297631 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.297710 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.297733 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.297795 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.297818 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:57Z","lastTransitionTime":"2026-01-28T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.400866 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.400952 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.400961 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.400980 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.400994 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:57Z","lastTransitionTime":"2026-01-28T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.505256 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.505431 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.505459 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.505538 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.505568 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:57Z","lastTransitionTime":"2026-01-28T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.519643 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tjk5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9476ba6-f6b7-4fef-8276-dea1f45b23f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47b7ae13c8b26a330ef57f28f179e24aad2ded516505644e6b87946a1e97ec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxw8d\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tjk5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.545319 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a0aebf3-b579-4254-9f8c-7163984836f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\" Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 09:43:42.437710 6749 
model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 09:43:42.437530 6749 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify cer\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:43:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55468c6baebf5047a0
e369859cd4a170904342c3255c51c706ad464e81001ecb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lnrct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6fwtd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.559082 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dqhmn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616fb0bb-a83f-4c11-bf69-941e43156a1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb75d8f9a359b7935311234de01537a9542b62af07e6d6a028987f620e012e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jsjf8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dqhmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.571035 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42c117b2-57a0-41f2-be48-1f97ec1bbb72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654b295d2e5a1a1b46320511f00de492771e80391afdfffab5f0b50fadbc15fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9141f53962aff2fbfd742f32bb167d504b6dbf79bb037c99e8ac8c03bee912b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc 
kubenswrapper[4735]: I0128 09:43:57.583631 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.596103 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.607513 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f975eb5-cf83-4145-8d73-879cc67863cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3f0e8b6860291ec81cb672421612a3d9dc152e4b2713fbddc9c1cf127e6c4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d25a2ed6071816880019b0397b75aa44556acc02
042cb5160951ed3172890259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lzrn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l7kzt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.609422 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.609528 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.609639 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:57 crc 
kubenswrapper[4735]: I0128 09:43:57.609733 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.609855 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:57Z","lastTransitionTime":"2026-01-28T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.619376 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d46ac42-9752-420a-8662-ed6d08ffae21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2b54d2ee33ed2cc6432dab312b644121197ffd7e9fb54311070a9ff0e2fe1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40c207ca7445006cf4cc5b6d9d32d5ff18456ff7bf690e8d62351684a385fa0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://767e3ba80ce3babf449820c066a7f7b3d022b015a0227579f2908fca49dc830d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.636598 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2f51927-89c0-4b90-84a2-ee484d692740\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02593b4310b50b6115d4fc9fcb9e75f22262b00aea6d5f8457f88cf22dc3a4b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db674fa6425807755e3c03b20c1c9b6c473908e0c0f5346bf8b029a38666f585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3f623106c85d44ba8c68c57939a87bc003be64f1aa9a0cca3a90a89b4ad2204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4274cd42e013260d9bf55768abb71dd01907b891de1cf23fa389f5c5148fc0d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://774d11be6ced02080d6f754fb9b3d9a6346873ecf9a2a0ad28849fc8161429ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f13f4c1c3be6b46bf13d17cb41160b21fb1c044fb20ee06fa1317583c71fdb05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f13f4c1c3be6b46bf13d17cb41160b21fb1c044fb20ee06fa1317583c71fdb05\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac732e1f927aebb1514996364c8ed6eaf6986d1eeab2c2b069744681e4e28307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac732e1f927aebb1514996364c8ed6eaf6986d1eeab2c2b069744681e4e28307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://877d7e81ccc5eac9e8c72cc551b8b8cedf1c59ea178d681fb825c60e2ac1446a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877d7e81ccc5eac9e8c72cc551b8b8cedf1c59ea178d681fb825c60e2ac1446a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.648345 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa72b405d44429a88dbd095a4d2f22304e56eaf829a24ee93e81ebd483ce1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.659795 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9420f813-d753-4228-9e17-c1123fa7a37b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T09:42:46Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 09:42:30.955309 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 09:42:30.961008 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1590301004/tls.crt::/tmp/serving-cert-1590301004/tls.key\\\\\\\"\\\\nI0128 09:42:46.246608 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 09:42:46.274898 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 09:42:46.274938 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 09:42:46.274967 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 09:42:46.274976 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 09:42:46.284909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 09:42:46.284952 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284958 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 09:42:46.284965 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 09:42:46.284968 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 09:42:46.284972 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 09:42:46.284974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 09:42:46.285386 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0128 09:42:46.291289 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa6af84662068545a697141b6075996e6
8b1aee384bf01560f2131bdaae5e58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.670044 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:31:28.084384998 +0000 UTC Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.671177 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e029483f-e084-4de3-a2c6-1c09302dd296\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfd947878931d41a8e267e14f37082e5422d1657e0f7ccff5e5a21370073564f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c9b3ab7d1f9b65a25542cad6c28dbbbaa7c667c465d1cd4fa8541878651e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310352af4bb9dde2cef662df6ca4b2eaf8472a24caa9a025445637c4b9494267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53761cd1ad5875e95f58222a1a5b8887ff3979904ae8851406ed66cae12698a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.682694 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.699246 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jxbwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c50beb21-41ab-474f-90d2-ba27152379cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T09:43:42Z\\\",\\\"message\\\":\\\"2026-01-28T09:42:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958\\\\n2026-01-28T09:42:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6f3c4c3a-6554-46fb-abd0-8da56f772958 to /host/opt/cni/bin/\\\\n2026-01-28T09:42:57Z [verbose] multus-daemon started\\\\n2026-01-28T09:42:57Z [verbose] 
Readiness Indicator file check\\\\n2026-01-28T09:43:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7qsbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jxbwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.710637 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zcskf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06fddb5b-5b1e-40de-9b21-e28092521208\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nj5d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zcskf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.712076 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.712112 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.712121 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.712136 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.712147 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:57Z","lastTransitionTime":"2026-01-28T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.723420 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64cae05e0ae43081b2d7f3632d98c9ce9d11fa59d79d1d7caac13920bdf77fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec7b0d3be1f9304abba00809561966e8bf286ecebcaaf350d750ee6419f26bd8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.735655 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e560ddb165be2bce0e00adbc3a9c6f9ae9d16fc2d582e603783b891a65f034f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.749665 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61bae89b-69e8-478b-83d8-b493d0e4d1bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3acbe574fdbbd16ec85f082178815e7db94e5232afa26aa49b869678a0f2ae61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc8f57eaf7426e30dea4507d7ae40eab4580d1d42397accd10d4c0c489e2172d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7707979c68c4bc0a674da873b43633d64632dadde4c4c535f08cd5676dc1a7d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://258e23c96213b8ce0996ebd6b19a65a88d9f74ba45036b28b277d8f230c4eaad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d8beceef781ef9185edb5d81a3374344a90a42fae4d4d6cde5f78e48c7743bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://759ff6580ee230bb6582839ea31c0969a622cef06f23d5ef5905a7b461d1bbf7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96c94c67feb87568197306caa0c6ddbba0ac0a04dd964db279830ed5df6055dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T09:42:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T09:42:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f46qx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:42:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nk6gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.759875 4735 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48a21a8a-4b58-4b90-bd07-a147834171f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T09:43:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f894695f97488e0a1af17e7afbe7e4bd50656b1e58ff07cec4a15581e73d8fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a42d88c1f71e6bbe9dd277eac7961c18f1fb9d60824387d830a16e158a6c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T09:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6pwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T09:43:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-22sq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T09:43:57Z is after 2025-08-24T17:21:41Z" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.814084 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.814382 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.814391 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.814406 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.814415 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:57Z","lastTransitionTime":"2026-01-28T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.917302 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.917345 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.917353 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.917367 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:57 crc kubenswrapper[4735]: I0128 09:43:57.917383 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:57Z","lastTransitionTime":"2026-01-28T09:43:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.019346 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.019407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.019425 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.019452 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.019469 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.122652 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.122702 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.122713 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.122729 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.122741 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.225374 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.225413 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.225425 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.225464 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.225479 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.328271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.328334 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.328354 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.328376 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.328392 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.431081 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.431463 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.431600 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.431766 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.431906 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.502105 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.502157 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.502267 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:43:58 crc kubenswrapper[4735]: E0128 09:43:58.502347 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:43:58 crc kubenswrapper[4735]: E0128 09:43:58.502566 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:43:58 crc kubenswrapper[4735]: E0128 09:43:58.502726 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.502944 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:43:58 crc kubenswrapper[4735]: E0128 09:43:58.503196 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.535906 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.535973 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.535982 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.536000 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.536014 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.639060 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.639117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.639131 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.639152 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.639165 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.670481 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:39:09.11316782 +0000 UTC Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.741960 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.742022 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.742044 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.742067 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.742082 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.845449 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.845489 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.845501 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.845522 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.845533 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.948923 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.948995 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.949020 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.949048 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:58 crc kubenswrapper[4735]: I0128 09:43:58.949069 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:58Z","lastTransitionTime":"2026-01-28T09:43:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.052000 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.052092 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.052119 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.052154 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.052178 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.155444 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.155502 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.155518 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.155540 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.155557 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.258129 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.258193 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.258205 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.258235 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.258251 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.361057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.361107 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.361142 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.361161 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.361172 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.463186 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.463284 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.463297 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.463322 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.463337 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.503343 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:43:59 crc kubenswrapper[4735]: E0128 09:43:59.503790 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.566243 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.566324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.566352 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.566383 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.566408 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.670119 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.670181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.670192 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.670212 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.670226 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.670879 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:26:49.545601262 +0000 UTC Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.773271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.773340 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.773362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.773394 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.773418 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.876385 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.876448 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.876463 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.876490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.876505 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.979167 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.979228 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.979251 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.979282 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:43:59 crc kubenswrapper[4735]: I0128 09:43:59.979305 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:43:59Z","lastTransitionTime":"2026-01-28T09:43:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.082546 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.082586 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.082597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.082612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.082623 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:00Z","lastTransitionTime":"2026-01-28T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.185117 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.185253 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.185274 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.185520 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.185548 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:00Z","lastTransitionTime":"2026-01-28T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.289102 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.289204 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.289231 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.289258 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.289276 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:00Z","lastTransitionTime":"2026-01-28T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.392001 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.392041 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.392052 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.392069 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.392082 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:00Z","lastTransitionTime":"2026-01-28T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.495588 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.495645 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.495662 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.495684 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.495701 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:00Z","lastTransitionTime":"2026-01-28T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.501947 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.502021 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.502058 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:00 crc kubenswrapper[4735]: E0128 09:44:00.502119 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.502162 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:00 crc kubenswrapper[4735]: E0128 09:44:00.502330 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:00 crc kubenswrapper[4735]: E0128 09:44:00.502389 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:00 crc kubenswrapper[4735]: E0128 09:44:00.502629 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.598529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.598571 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.598580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.598596 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.598606 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:00Z","lastTransitionTime":"2026-01-28T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.671395 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:31:05.382418127 +0000 UTC Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.701478 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.701585 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.701605 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.701630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.701648 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:00Z","lastTransitionTime":"2026-01-28T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.804815 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.804874 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.804893 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.804921 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.804939 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:00Z","lastTransitionTime":"2026-01-28T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.907171 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.907210 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.907219 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.907235 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:00 crc kubenswrapper[4735]: I0128 09:44:00.907245 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:00Z","lastTransitionTime":"2026-01-28T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.009573 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.009620 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.009630 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.009648 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.009659 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.133468 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.133513 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.133521 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.133546 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.133556 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.237333 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.237416 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.237442 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.237472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.237486 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.339891 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.339955 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.339971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.339991 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.340003 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.442984 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.443041 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.443054 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.443074 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.443086 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.546362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.546430 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.546440 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.546460 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.546473 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.649411 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.649509 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.649529 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.649557 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.649578 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.672302 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:57:54.611881307 +0000 UTC Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.751956 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.752037 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.752057 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.752085 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.752103 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.855826 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.855890 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.855905 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.855929 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.855942 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.959324 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.959389 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.959407 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.959433 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:01 crc kubenswrapper[4735]: I0128 09:44:01.959452 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:01Z","lastTransitionTime":"2026-01-28T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.062826 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.062888 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.062911 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.062940 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.062963 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.165188 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.165273 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.165296 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.165323 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.165346 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.268700 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.268824 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.268837 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.268859 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.268872 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.372617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.372699 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.372711 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.372732 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.372794 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.476166 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.476271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.476298 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.476328 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.476352 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.502207 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.502308 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.502400 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:02 crc kubenswrapper[4735]: E0128 09:44:02.502518 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.502587 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:02 crc kubenswrapper[4735]: E0128 09:44:02.502877 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:02 crc kubenswrapper[4735]: E0128 09:44:02.502999 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:02 crc kubenswrapper[4735]: E0128 09:44:02.503095 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.579112 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.579157 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.579169 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.579184 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.579194 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.673410 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:56:48.518962069 +0000 UTC Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.682220 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.682288 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.682303 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.682327 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.682343 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.785381 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.785430 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.785441 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.785462 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.785478 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.887554 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.887612 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.887633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.887653 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.887668 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.989822 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.989877 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.989896 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.989916 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:02 crc kubenswrapper[4735]: I0128 09:44:02.989935 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:02Z","lastTransitionTime":"2026-01-28T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.093456 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.093576 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.093597 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.093619 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.093635 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:03Z","lastTransitionTime":"2026-01-28T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.203139 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.203193 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.203206 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.203232 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.203245 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:03Z","lastTransitionTime":"2026-01-28T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.307181 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.307243 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.307259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.307580 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.307614 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:03Z","lastTransitionTime":"2026-01-28T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.411028 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.411083 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.411106 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.411133 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.411156 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:03Z","lastTransitionTime":"2026-01-28T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.514190 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.514236 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.514244 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.514257 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.514266 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:03Z","lastTransitionTime":"2026-01-28T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.616922 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.616971 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.616982 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.617000 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.617012 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:03Z","lastTransitionTime":"2026-01-28T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.674246 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:02:35.031290122 +0000 UTC Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.720175 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.720230 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.720241 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.720259 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.720272 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:03Z","lastTransitionTime":"2026-01-28T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.823490 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.823538 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.823550 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.823568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.823580 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:03Z","lastTransitionTime":"2026-01-28T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.926829 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.926886 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.926906 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.926930 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:03 crc kubenswrapper[4735]: I0128 09:44:03.926947 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:03Z","lastTransitionTime":"2026-01-28T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.029504 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.029586 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.029605 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.029633 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.029650 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.133118 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.133198 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.133222 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.133252 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.133276 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.236535 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.236602 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.236617 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.236640 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.236655 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.340079 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.340154 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.340172 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.340200 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.340220 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.443056 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.443124 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.443145 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.443173 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.443193 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.501933 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.502016 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.501963 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.501963 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:04 crc kubenswrapper[4735]: E0128 09:44:04.503056 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:04 crc kubenswrapper[4735]: E0128 09:44:04.503168 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:04 crc kubenswrapper[4735]: E0128 09:44:04.503340 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:04 crc kubenswrapper[4735]: E0128 09:44:04.503506 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.546425 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.546524 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.546539 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.546560 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.546571 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.649626 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.649711 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.649725 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.649968 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.649982 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.675077 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:53:38.642149835 +0000 UTC Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.753282 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.753362 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.753383 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.753408 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.753427 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.856945 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.857019 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.857043 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.857073 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.857094 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.960221 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.960271 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.960281 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.960297 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:04 crc kubenswrapper[4735]: I0128 09:44:04.960309 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:04Z","lastTransitionTime":"2026-01-28T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.064397 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.064456 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.064472 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.064494 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.064510 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:05Z","lastTransitionTime":"2026-01-28T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.167498 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.167551 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.167568 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.167592 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.167610 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:05Z","lastTransitionTime":"2026-01-28T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.253242 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.253313 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.253330 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.253355 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.253373 4735 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T09:44:05Z","lastTransitionTime":"2026-01-28T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.320718 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g"] Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.321374 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.325445 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.325533 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.325947 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.326210 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.382603 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jxbwx" podStartSLOduration=73.382579666 podStartE2EDuration="1m13.382579666s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.369904998 +0000 UTC m=+98.519569411" watchObservedRunningTime="2026-01-28 09:44:05.382579666 +0000 UTC m=+98.532244049" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.422325 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.422299724 podStartE2EDuration="49.422299724s" podCreationTimestamp="2026-01-28 09:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.421199335 +0000 UTC m=+98.570863718" watchObservedRunningTime="2026-01-28 09:44:05.422299724 +0000 UTC m=+98.571964097" Jan 28 
09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.422708 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.422701724 podStartE2EDuration="1m19.422701724s" podCreationTimestamp="2026-01-28 09:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.403408395 +0000 UTC m=+98.553072778" watchObservedRunningTime="2026-01-28 09:44:05.422701724 +0000 UTC m=+98.572366097" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.438392 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nk6gd" podStartSLOduration=73.438370519 podStartE2EDuration="1m13.438370519s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.438278637 +0000 UTC m=+98.587943020" watchObservedRunningTime="2026-01-28 09:44:05.438370519 +0000 UTC m=+98.588034892" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.470578 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-22sq5" podStartSLOduration=73.470552532 podStartE2EDuration="1m13.470552532s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.455801471 +0000 UTC m=+98.605465844" watchObservedRunningTime="2026-01-28 09:44:05.470552532 +0000 UTC m=+98.620216905" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.491915 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/2acb2036-b96d-4cdf-9a87-b90c4019a413-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.491984 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2acb2036-b96d-4cdf-9a87-b90c4019a413-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.492023 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2acb2036-b96d-4cdf-9a87-b90c4019a413-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.492083 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2acb2036-b96d-4cdf-9a87-b90c4019a413-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.492113 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2acb2036-b96d-4cdf-9a87-b90c4019a413-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.538126 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podStartSLOduration=73.53809915 podStartE2EDuration="1m13.53809915s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.537840362 +0000 UTC m=+98.687504735" watchObservedRunningTime="2026-01-28 09:44:05.53809915 +0000 UTC m=+98.687763523" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.579380 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tjk5s" podStartSLOduration=73.579358457 podStartE2EDuration="1m13.579358457s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.553713874 +0000 UTC m=+98.703378267" watchObservedRunningTime="2026-01-28 09:44:05.579358457 +0000 UTC m=+98.729022840" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.592191 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dqhmn" podStartSLOduration=73.592172768 podStartE2EDuration="1m13.592172768s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.59185507 +0000 UTC m=+98.741519453" watchObservedRunningTime="2026-01-28 09:44:05.592172768 +0000 UTC m=+98.741837141" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.593139 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/2acb2036-b96d-4cdf-9a87-b90c4019a413-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.593179 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2acb2036-b96d-4cdf-9a87-b90c4019a413-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.593204 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2acb2036-b96d-4cdf-9a87-b90c4019a413-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.593246 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2acb2036-b96d-4cdf-9a87-b90c4019a413-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.593271 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2acb2036-b96d-4cdf-9a87-b90c4019a413-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc 
kubenswrapper[4735]: I0128 09:44:05.593245 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2acb2036-b96d-4cdf-9a87-b90c4019a413-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.593384 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2acb2036-b96d-4cdf-9a87-b90c4019a413-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.594048 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2acb2036-b96d-4cdf-9a87-b90c4019a413-service-ca\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.602637 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.602623609 podStartE2EDuration="32.602623609s" podCreationTimestamp="2026-01-28 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.602075164 +0000 UTC m=+98.751739537" watchObservedRunningTime="2026-01-28 09:44:05.602623609 +0000 UTC m=+98.752287982" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.605128 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2acb2036-b96d-4cdf-9a87-b90c4019a413-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.617344 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2acb2036-b96d-4cdf-9a87-b90c4019a413-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-v7z9g\" (UID: \"2acb2036-b96d-4cdf-9a87-b90c4019a413\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.642815 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.667219 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.667183709 podStartE2EDuration="1m16.667183709s" podCreationTimestamp="2026-01-28 09:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.661185533 +0000 UTC m=+98.810849916" watchObservedRunningTime="2026-01-28 09:44:05.667183709 +0000 UTC m=+98.816848082" Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.675738 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:43:50.566650196 +0000 UTC Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.675832 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.683378 4735 reflector.go:368] Caches populated for 
*v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 09:44:05 crc kubenswrapper[4735]: I0128 09:44:05.693563 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.693546031 podStartE2EDuration="18.693546031s" podCreationTimestamp="2026-01-28 09:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:05.692470683 +0000 UTC m=+98.842135056" watchObservedRunningTime="2026-01-28 09:44:05.693546031 +0000 UTC m=+98.843210404" Jan 28 09:44:06 crc kubenswrapper[4735]: I0128 09:44:06.154341 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" event={"ID":"2acb2036-b96d-4cdf-9a87-b90c4019a413","Type":"ContainerStarted","Data":"969d128956d5442644d6c2c564c64f0172164de8ce09ff8b734902120c5704eb"} Jan 28 09:44:06 crc kubenswrapper[4735]: I0128 09:44:06.154440 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" event={"ID":"2acb2036-b96d-4cdf-9a87-b90c4019a413","Type":"ContainerStarted","Data":"ca117e8bd47c575b5d2f70f3a437126135032e84caddb8265b8be663300eb2eb"} Jan 28 09:44:06 crc kubenswrapper[4735]: I0128 09:44:06.176895 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-v7z9g" podStartSLOduration=74.176859595 podStartE2EDuration="1m14.176859595s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:06.174176035 +0000 UTC m=+99.323840418" watchObservedRunningTime="2026-01-28 09:44:06.176859595 +0000 UTC m=+99.326524008" Jan 28 09:44:06 crc kubenswrapper[4735]: I0128 09:44:06.502486 4735 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:06 crc kubenswrapper[4735]: I0128 09:44:06.502524 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:06 crc kubenswrapper[4735]: I0128 09:44:06.502569 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:06 crc kubenswrapper[4735]: E0128 09:44:06.502692 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:06 crc kubenswrapper[4735]: I0128 09:44:06.502712 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:06 crc kubenswrapper[4735]: E0128 09:44:06.503072 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:06 crc kubenswrapper[4735]: E0128 09:44:06.503132 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:06 crc kubenswrapper[4735]: E0128 09:44:06.503259 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:08 crc kubenswrapper[4735]: I0128 09:44:08.501880 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:08 crc kubenswrapper[4735]: I0128 09:44:08.502022 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:08 crc kubenswrapper[4735]: I0128 09:44:08.502109 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:08 crc kubenswrapper[4735]: I0128 09:44:08.501931 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:08 crc kubenswrapper[4735]: E0128 09:44:08.502187 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:08 crc kubenswrapper[4735]: E0128 09:44:08.502369 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:08 crc kubenswrapper[4735]: E0128 09:44:08.502574 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:08 crc kubenswrapper[4735]: E0128 09:44:08.502698 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:10 crc kubenswrapper[4735]: I0128 09:44:10.502163 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:10 crc kubenswrapper[4735]: I0128 09:44:10.502166 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:10 crc kubenswrapper[4735]: I0128 09:44:10.502187 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:10 crc kubenswrapper[4735]: E0128 09:44:10.502503 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:10 crc kubenswrapper[4735]: E0128 09:44:10.502523 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:10 crc kubenswrapper[4735]: E0128 09:44:10.502571 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:10 crc kubenswrapper[4735]: I0128 09:44:10.502707 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:10 crc kubenswrapper[4735]: E0128 09:44:10.502957 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:10 crc kubenswrapper[4735]: I0128 09:44:10.963320 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:10 crc kubenswrapper[4735]: E0128 09:44:10.963724 4735 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:44:10 crc kubenswrapper[4735]: E0128 09:44:10.963894 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs podName:06fddb5b-5b1e-40de-9b21-e28092521208 nodeName:}" failed. No retries permitted until 2026-01-28 09:45:14.963855358 +0000 UTC m=+168.113519761 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs") pod "network-metrics-daemon-zcskf" (UID: "06fddb5b-5b1e-40de-9b21-e28092521208") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 09:44:12 crc kubenswrapper[4735]: I0128 09:44:12.502783 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:12 crc kubenswrapper[4735]: I0128 09:44:12.502855 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:12 crc kubenswrapper[4735]: I0128 09:44:12.502844 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:12 crc kubenswrapper[4735]: I0128 09:44:12.502835 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:12 crc kubenswrapper[4735]: E0128 09:44:12.503026 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:12 crc kubenswrapper[4735]: E0128 09:44:12.503123 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:12 crc kubenswrapper[4735]: E0128 09:44:12.503283 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:12 crc kubenswrapper[4735]: E0128 09:44:12.503352 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:14 crc kubenswrapper[4735]: I0128 09:44:14.502173 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:14 crc kubenswrapper[4735]: I0128 09:44:14.502173 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:14 crc kubenswrapper[4735]: I0128 09:44:14.502189 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:14 crc kubenswrapper[4735]: I0128 09:44:14.502487 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:14 crc kubenswrapper[4735]: E0128 09:44:14.502646 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:14 crc kubenswrapper[4735]: E0128 09:44:14.502804 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:14 crc kubenswrapper[4735]: E0128 09:44:14.503344 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:14 crc kubenswrapper[4735]: E0128 09:44:14.503561 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:14 crc kubenswrapper[4735]: I0128 09:44:14.503963 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:44:14 crc kubenswrapper[4735]: E0128 09:44:14.504230 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6fwtd_openshift-ovn-kubernetes(9a0aebf3-b579-4254-9f8c-7163984836f9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" Jan 28 09:44:16 crc kubenswrapper[4735]: I0128 09:44:16.502095 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:16 crc kubenswrapper[4735]: I0128 09:44:16.502189 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:16 crc kubenswrapper[4735]: I0128 09:44:16.502205 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:16 crc kubenswrapper[4735]: I0128 09:44:16.502142 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:16 crc kubenswrapper[4735]: E0128 09:44:16.502326 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:16 crc kubenswrapper[4735]: E0128 09:44:16.502429 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:16 crc kubenswrapper[4735]: E0128 09:44:16.502534 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:16 crc kubenswrapper[4735]: E0128 09:44:16.502642 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:18 crc kubenswrapper[4735]: I0128 09:44:18.502182 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:18 crc kubenswrapper[4735]: I0128 09:44:18.502300 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:18 crc kubenswrapper[4735]: I0128 09:44:18.502336 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:18 crc kubenswrapper[4735]: I0128 09:44:18.502481 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:18 crc kubenswrapper[4735]: E0128 09:44:18.502697 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:18 crc kubenswrapper[4735]: E0128 09:44:18.502848 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:18 crc kubenswrapper[4735]: E0128 09:44:18.502960 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:18 crc kubenswrapper[4735]: E0128 09:44:18.503815 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:20 crc kubenswrapper[4735]: I0128 09:44:20.502332 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:20 crc kubenswrapper[4735]: I0128 09:44:20.502434 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:20 crc kubenswrapper[4735]: I0128 09:44:20.502432 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:20 crc kubenswrapper[4735]: I0128 09:44:20.502427 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:20 crc kubenswrapper[4735]: E0128 09:44:20.502509 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:20 crc kubenswrapper[4735]: E0128 09:44:20.502581 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:20 crc kubenswrapper[4735]: E0128 09:44:20.502733 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:20 crc kubenswrapper[4735]: E0128 09:44:20.502890 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:22 crc kubenswrapper[4735]: I0128 09:44:22.501987 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:22 crc kubenswrapper[4735]: I0128 09:44:22.502034 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:22 crc kubenswrapper[4735]: I0128 09:44:22.502102 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:22 crc kubenswrapper[4735]: I0128 09:44:22.502034 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:22 crc kubenswrapper[4735]: E0128 09:44:22.502259 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:22 crc kubenswrapper[4735]: E0128 09:44:22.502602 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:22 crc kubenswrapper[4735]: E0128 09:44:22.502857 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:22 crc kubenswrapper[4735]: E0128 09:44:22.503004 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:24 crc kubenswrapper[4735]: I0128 09:44:24.502329 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:24 crc kubenswrapper[4735]: I0128 09:44:24.502363 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:24 crc kubenswrapper[4735]: I0128 09:44:24.502409 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:24 crc kubenswrapper[4735]: E0128 09:44:24.502473 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:24 crc kubenswrapper[4735]: I0128 09:44:24.502502 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:24 crc kubenswrapper[4735]: E0128 09:44:24.502715 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:24 crc kubenswrapper[4735]: E0128 09:44:24.502841 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:24 crc kubenswrapper[4735]: E0128 09:44:24.503010 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:26 crc kubenswrapper[4735]: I0128 09:44:26.501860 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:26 crc kubenswrapper[4735]: I0128 09:44:26.501948 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:26 crc kubenswrapper[4735]: E0128 09:44:26.502006 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:26 crc kubenswrapper[4735]: I0128 09:44:26.502107 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:26 crc kubenswrapper[4735]: E0128 09:44:26.502395 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:26 crc kubenswrapper[4735]: I0128 09:44:26.502490 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:26 crc kubenswrapper[4735]: E0128 09:44:26.502710 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:26 crc kubenswrapper[4735]: E0128 09:44:26.502909 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:27 crc kubenswrapper[4735]: I0128 09:44:27.503401 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:44:27 crc kubenswrapper[4735]: E0128 09:44:27.518437 4735 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 09:44:27 crc kubenswrapper[4735]: E0128 09:44:27.624545 4735 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 09:44:28 crc kubenswrapper[4735]: I0128 09:44:28.239232 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/3.log" Jan 28 09:44:28 crc kubenswrapper[4735]: I0128 09:44:28.243028 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerStarted","Data":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} Jan 28 09:44:28 crc kubenswrapper[4735]: I0128 09:44:28.243547 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:44:28 crc kubenswrapper[4735]: I0128 09:44:28.280209 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podStartSLOduration=96.280189096 podStartE2EDuration="1m36.280189096s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:28.279037506 +0000 UTC m=+121.428701889" watchObservedRunningTime="2026-01-28 09:44:28.280189096 +0000 UTC m=+121.429853489" Jan 28 09:44:28 crc kubenswrapper[4735]: I0128 09:44:28.502568 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:28 crc kubenswrapper[4735]: I0128 09:44:28.502600 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:28 crc kubenswrapper[4735]: I0128 09:44:28.502604 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:28 crc kubenswrapper[4735]: E0128 09:44:28.502706 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:28 crc kubenswrapper[4735]: I0128 09:44:28.502830 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:28 crc kubenswrapper[4735]: E0128 09:44:28.502930 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:28 crc kubenswrapper[4735]: E0128 09:44:28.502950 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:28 crc kubenswrapper[4735]: E0128 09:44:28.502998 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:28 crc kubenswrapper[4735]: I0128 09:44:28.598335 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zcskf"] Jan 28 09:44:29 crc kubenswrapper[4735]: I0128 09:44:29.247378 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/1.log" Jan 28 09:44:29 crc kubenswrapper[4735]: I0128 09:44:29.248675 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/0.log" Jan 28 09:44:29 crc kubenswrapper[4735]: I0128 09:44:29.248727 4735 generic.go:334] "Generic (PLEG): container finished" podID="c50beb21-41ab-474f-90d2-ba27152379cf" containerID="c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d" exitCode=1 Jan 28 09:44:29 crc kubenswrapper[4735]: I0128 09:44:29.248778 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jxbwx" event={"ID":"c50beb21-41ab-474f-90d2-ba27152379cf","Type":"ContainerDied","Data":"c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d"} Jan 28 09:44:29 crc kubenswrapper[4735]: I0128 09:44:29.248831 4735 scope.go:117] "RemoveContainer" containerID="3a164b78b1446598202981e4b12326019f419ffc6733f05f91a8479f8dea1021" Jan 28 09:44:29 crc kubenswrapper[4735]: I0128 09:44:29.248893 4735 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:29 crc kubenswrapper[4735]: E0128 09:44:29.249240 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:29 crc kubenswrapper[4735]: I0128 09:44:29.249788 4735 scope.go:117] "RemoveContainer" containerID="c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d" Jan 28 09:44:29 crc kubenswrapper[4735]: E0128 09:44:29.250042 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-jxbwx_openshift-multus(c50beb21-41ab-474f-90d2-ba27152379cf)\"" pod="openshift-multus/multus-jxbwx" podUID="c50beb21-41ab-474f-90d2-ba27152379cf" Jan 28 09:44:30 crc kubenswrapper[4735]: I0128 09:44:30.254514 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/1.log" Jan 28 09:44:30 crc kubenswrapper[4735]: I0128 09:44:30.502661 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:30 crc kubenswrapper[4735]: E0128 09:44:30.502831 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:30 crc kubenswrapper[4735]: I0128 09:44:30.502971 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:30 crc kubenswrapper[4735]: I0128 09:44:30.503042 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:30 crc kubenswrapper[4735]: I0128 09:44:30.502986 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:30 crc kubenswrapper[4735]: E0128 09:44:30.503155 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:30 crc kubenswrapper[4735]: E0128 09:44:30.503210 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:30 crc kubenswrapper[4735]: E0128 09:44:30.503292 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:32 crc kubenswrapper[4735]: I0128 09:44:32.502339 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:32 crc kubenswrapper[4735]: I0128 09:44:32.502450 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:32 crc kubenswrapper[4735]: I0128 09:44:32.502379 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:32 crc kubenswrapper[4735]: I0128 09:44:32.502363 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:32 crc kubenswrapper[4735]: E0128 09:44:32.502557 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:32 crc kubenswrapper[4735]: E0128 09:44:32.502725 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:32 crc kubenswrapper[4735]: E0128 09:44:32.502919 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:32 crc kubenswrapper[4735]: E0128 09:44:32.503058 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:32 crc kubenswrapper[4735]: E0128 09:44:32.627031 4735 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 09:44:34 crc kubenswrapper[4735]: I0128 09:44:34.501831 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:34 crc kubenswrapper[4735]: E0128 09:44:34.501965 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:34 crc kubenswrapper[4735]: I0128 09:44:34.501860 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:34 crc kubenswrapper[4735]: I0128 09:44:34.501831 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:34 crc kubenswrapper[4735]: E0128 09:44:34.502032 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:34 crc kubenswrapper[4735]: I0128 09:44:34.502397 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:34 crc kubenswrapper[4735]: E0128 09:44:34.502537 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:34 crc kubenswrapper[4735]: E0128 09:44:34.502704 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:36 crc kubenswrapper[4735]: I0128 09:44:36.502193 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:36 crc kubenswrapper[4735]: I0128 09:44:36.502252 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:36 crc kubenswrapper[4735]: I0128 09:44:36.502217 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:36 crc kubenswrapper[4735]: E0128 09:44:36.502386 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:36 crc kubenswrapper[4735]: E0128 09:44:36.502546 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:36 crc kubenswrapper[4735]: E0128 09:44:36.502656 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:36 crc kubenswrapper[4735]: I0128 09:44:36.503085 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:36 crc kubenswrapper[4735]: E0128 09:44:36.503293 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:37 crc kubenswrapper[4735]: E0128 09:44:37.628063 4735 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 09:44:38 crc kubenswrapper[4735]: I0128 09:44:38.502097 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:38 crc kubenswrapper[4735]: I0128 09:44:38.502159 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:38 crc kubenswrapper[4735]: I0128 09:44:38.502155 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:38 crc kubenswrapper[4735]: E0128 09:44:38.502291 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:38 crc kubenswrapper[4735]: I0128 09:44:38.502386 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:38 crc kubenswrapper[4735]: E0128 09:44:38.502442 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:38 crc kubenswrapper[4735]: E0128 09:44:38.502622 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:38 crc kubenswrapper[4735]: E0128 09:44:38.502713 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:39 crc kubenswrapper[4735]: I0128 09:44:39.502434 4735 scope.go:117] "RemoveContainer" containerID="c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d" Jan 28 09:44:40 crc kubenswrapper[4735]: I0128 09:44:40.291124 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/1.log" Jan 28 09:44:40 crc kubenswrapper[4735]: I0128 09:44:40.292252 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jxbwx" event={"ID":"c50beb21-41ab-474f-90d2-ba27152379cf","Type":"ContainerStarted","Data":"3a9483e690e02a0014915fbf7e27f090be84b506983548028df8cf697a5112a6"} Jan 28 09:44:40 crc kubenswrapper[4735]: I0128 09:44:40.502471 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:40 crc kubenswrapper[4735]: I0128 09:44:40.502471 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:40 crc kubenswrapper[4735]: I0128 09:44:40.502598 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:40 crc kubenswrapper[4735]: I0128 09:44:40.502492 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:40 crc kubenswrapper[4735]: E0128 09:44:40.502677 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:40 crc kubenswrapper[4735]: E0128 09:44:40.502765 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:40 crc kubenswrapper[4735]: E0128 09:44:40.502848 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:40 crc kubenswrapper[4735]: E0128 09:44:40.503064 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:42 crc kubenswrapper[4735]: I0128 09:44:42.502338 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:42 crc kubenswrapper[4735]: E0128 09:44:42.502539 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zcskf" podUID="06fddb5b-5b1e-40de-9b21-e28092521208" Jan 28 09:44:42 crc kubenswrapper[4735]: I0128 09:44:42.502855 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:42 crc kubenswrapper[4735]: E0128 09:44:42.502944 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 09:44:42 crc kubenswrapper[4735]: I0128 09:44:42.503130 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:42 crc kubenswrapper[4735]: E0128 09:44:42.503203 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 09:44:42 crc kubenswrapper[4735]: I0128 09:44:42.503344 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:42 crc kubenswrapper[4735]: E0128 09:44:42.503410 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.501997 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.502053 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.502059 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.501995 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.507000 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.507024 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.507342 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.507351 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.507551 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 09:44:44 crc kubenswrapper[4735]: I0128 09:44:44.507613 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 09:44:45 crc kubenswrapper[4735]: I0128 09:44:45.423225 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.160561 4735 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.230013 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-knnrt"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.230652 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.231378 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.232141 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.232268 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.234306 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.234904 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.235366 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.235426 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.235921 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.235997 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.236774 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.237601 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.238068 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.240495 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9mmgl"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.241493 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.241957 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.242202 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.243791 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.244397 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.245657 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.246793 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"] Jan 28 09:44:46 crc 
kubenswrapper[4735]: I0128 09:44:46.247445 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.249578 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.280836 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.282971 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.283294 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.283553 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.284325 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.284456 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.284645 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.284929 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.284949 4735 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.285331 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.285501 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.285801 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.285923 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.285969 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.286151 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.286637 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.286848 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.286990 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.293740 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.293829 4735 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294008 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294048 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294063 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294236 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294248 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294366 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294397 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294536 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294561 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294697 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294710 4735 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294934 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294985 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.295075 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.295114 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.295144 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.295210 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.295271 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.295427 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.294940 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.295930 4735 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.296143 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.297430 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.298069 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.298139 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.298285 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.298326 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.298290 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.298994 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.299110 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.308593 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: 
I0128 09:44:46.308940 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.309561 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.312191 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.316028 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.317329 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.324839 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.325189 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.325323 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.325582 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s5d86"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.325892 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.325900 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4qnqk"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.326015 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.326473 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8xlx7"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.326867 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8xlx7" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.327012 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.327069 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.327926 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6dsdg"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.328049 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.328340 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.328384 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.328505 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.329274 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.329802 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.333258 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.333622 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.334257 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.335370 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.335539 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.335759 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.337001 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.337152 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.337327 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.337418 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.337479 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.337591 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.337799 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.338089 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.338332 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.338541 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.338675 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.338762 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.338808 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.337449 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.338898 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.338981 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.339558 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.339683 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.339912 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.340174 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.340284 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.341740 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 
09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.341726 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhwqk"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.342715 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85zqb"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.343019 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.343788 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.343805 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.358136 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.358331 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.358474 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.363736 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c95cr"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.365736 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c95cr" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.366611 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lb5dv"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.368499 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.379643 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.382583 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.382960 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.383203 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.383386 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.383814 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.385720 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.386360 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.386864 4735 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.387244 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.387316 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.387757 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.388526 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.388791 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.388982 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.389712 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.390252 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.390848 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.391401 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.395701 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjnbj"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.396646 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.397064 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.397563 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.397675 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.398821 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400096 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-audit-dir\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400210 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-config\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400239 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a1b5d2f-fff3-4843-875e-001584dbbba1-config\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400319 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e308a994-46ac-4ad4-9b44-f4e0cc910c81-serving-cert\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400352 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66w8q\" (UniqueName: 
\"kubernetes.io/projected/7a91b319-14fa-4e1a-90f7-5d9ae9758ad1-kube-api-access-66w8q\") pod \"openshift-apiserver-operator-796bbdcf4f-wxk69\" (UID: \"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400376 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6rg8\" (UniqueName: \"kubernetes.io/projected/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-kube-api-access-k6rg8\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400401 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdwkj\" (UniqueName: \"kubernetes.io/projected/5f918d4c-80ee-435f-a363-f630e97965a0-kube-api-access-fdwkj\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400445 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-encryption-config\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400469 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-audit-policies\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400492 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f918d4c-80ee-435f-a363-f630e97965a0-config\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400521 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5d19be7-214e-4428-835a-5c325772fe6e-images\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400544 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-etcd-client\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400565 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-audit\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400592 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcd226c-c318-412c-9483-2f359c9a1b75-serving-cert\") 
pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400614 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-serving-cert\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400637 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f918d4c-80ee-435f-a363-f630e97965a0-serving-cert\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400661 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlm42\" (UniqueName: \"kubernetes.io/projected/8a1b5d2f-fff3-4843-875e-001584dbbba1-kube-api-access-rlm42\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400688 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-encryption-config\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.400735 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5693ded5-11d5-4a9e-9c3e-737d02dcec96-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vjfr8\" (UID: \"5693ded5-11d5-4a9e-9c3e-737d02dcec96\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.403723 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.404463 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.405054 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.405256 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.405387 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406684 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a91b319-14fa-4e1a-90f7-5d9ae9758ad1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wxk69\" (UID: \"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406732 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-config\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406772 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406791 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-client-ca\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406857 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-image-import-ca\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406885 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-etcd-serving-ca\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406909 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sldf\" (UniqueName: \"kubernetes.io/projected/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-kube-api-access-4sldf\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406957 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-serving-cert\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406977 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbl8\" (UniqueName: \"kubernetes.io/projected/5693ded5-11d5-4a9e-9c3e-737d02dcec96-kube-api-access-5mbl8\") pod \"openshift-config-operator-7777fb866f-vjfr8\" (UID: \"5693ded5-11d5-4a9e-9c3e-737d02dcec96\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" Jan 28 
09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.406999 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-node-pullsecrets\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407016 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-audit-dir\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407033 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a91b319-14fa-4e1a-90f7-5d9ae9758ad1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wxk69\" (UID: \"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407054 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-config\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407288 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407370 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c5d19be7-214e-4428-835a-5c325772fe6e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407408 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407430 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8qv\" (UniqueName: \"kubernetes.io/projected/8dcd226c-c318-412c-9483-2f359c9a1b75-kube-api-access-rw8qv\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407450 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5693ded5-11d5-4a9e-9c3e-737d02dcec96-serving-cert\") pod \"openshift-config-operator-7777fb866f-vjfr8\" (UID: \"5693ded5-11d5-4a9e-9c3e-737d02dcec96\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407479 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-client-ca\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407509 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-428gr\" (UniqueName: \"kubernetes.io/projected/c5d19be7-214e-4428-835a-5c325772fe6e-kube-api-access-428gr\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407529 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f918d4c-80ee-435f-a363-f630e97965a0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407556 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f918d4c-80ee-435f-a363-f630e97965a0-service-ca-bundle\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407575 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8a1b5d2f-fff3-4843-875e-001584dbbba1-auth-proxy-config\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407592 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkwt5\" (UniqueName: \"kubernetes.io/projected/e308a994-46ac-4ad4-9b44-f4e0cc910c81-kube-api-access-dkwt5\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407609 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5d19be7-214e-4428-835a-5c325772fe6e-config\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407650 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.407668 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-etcd-client\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:46 crc 
kubenswrapper[4735]: I0128 09:44:46.407685 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8a1b5d2f-fff3-4843-875e-001584dbbba1-machine-approver-tls\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.408864 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tkljg"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.409562 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.410652 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.411121 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.416235 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.416950 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zghzx"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.418289 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.419001 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.422525 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.424168 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.424348 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.425295 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.425311 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.425323 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-knnrt"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.425334 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.424487 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.425929 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.426006 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.426623 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.427641 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.428806 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s5d86"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.430587 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4qnqk"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.433911 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.434271 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.434705 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85zqb"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.435530 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.452288 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.452374 4735 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.454133 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.457952 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.459642 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.466580 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.468231 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.470690 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8xlx7"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.471779 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6dsdg"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.474793 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9mmgl"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.474858 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.476016 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhwqk"] Jan 28 09:44:46 crc kubenswrapper[4735]: 
I0128 09:44:46.480146 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c95cr"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.481954 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2zhvr"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.483985 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.485912 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.487280 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjnbj"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.488604 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tkljg"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.490735 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.496972 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.498013 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lb5dv"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.500367 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.502078 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.507762 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm"] Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508311 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-config\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508347 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508381 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84sck\" (UniqueName: \"kubernetes.io/projected/9c6798d5-2a74-4a6f-95dc-184b82db4a25-kube-api-access-84sck\") pod \"downloads-7954f5f757-8xlx7\" (UID: \"9c6798d5-2a74-4a6f-95dc-184b82db4a25\") " pod="openshift-console/downloads-7954f5f757-8xlx7" Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508410 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c5d19be7-214e-4428-835a-5c325772fe6e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508433 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508454 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw8qv\" (UniqueName: \"kubernetes.io/projected/8dcd226c-c318-412c-9483-2f359c9a1b75-kube-api-access-rw8qv\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508488 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5693ded5-11d5-4a9e-9c3e-737d02dcec96-serving-cert\") pod \"openshift-config-operator-7777fb866f-vjfr8\" (UID: \"5693ded5-11d5-4a9e-9c3e-737d02dcec96\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508512 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-client-ca\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508536 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-428gr\" (UniqueName: \"kubernetes.io/projected/c5d19be7-214e-4428-835a-5c325772fe6e-kube-api-access-428gr\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508560 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f918d4c-80ee-435f-a363-f630e97965a0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508585 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f918d4c-80ee-435f-a363-f630e97965a0-service-ca-bundle\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508612 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5d19be7-214e-4428-835a-5c325772fe6e-config\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508637 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a1b5d2f-fff3-4843-875e-001584dbbba1-auth-proxy-config\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508661 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkwt5\" (UniqueName: \"kubernetes.io/projected/e308a994-46ac-4ad4-9b44-f4e0cc910c81-kube-api-access-dkwt5\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508698 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508721 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-etcd-client\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508771 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8a1b5d2f-fff3-4843-875e-001584dbbba1-machine-approver-tls\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.508800 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-audit-dir\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.509971 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5d19be7-214e-4428-835a-5c325772fe6e-config\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510118 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510374 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-config\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510437 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a1b5d2f-fff3-4843-875e-001584dbbba1-config\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510476 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e308a994-46ac-4ad4-9b44-f4e0cc910c81-serving-cert\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510556 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66w8q\" (UniqueName: \"kubernetes.io/projected/7a91b319-14fa-4e1a-90f7-5d9ae9758ad1-kube-api-access-66w8q\") pod \"openshift-apiserver-operator-796bbdcf4f-wxk69\" (UID: \"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510591 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6rg8\" (UniqueName: \"kubernetes.io/projected/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-kube-api-access-k6rg8\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510631 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdwkj\" (UniqueName: \"kubernetes.io/projected/5f918d4c-80ee-435f-a363-f630e97965a0-kube-api-access-fdwkj\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510675 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-encryption-config\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510710 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-audit-policies\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510740 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f918d4c-80ee-435f-a363-f630e97965a0-config\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510768 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510809 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5d19be7-214e-4428-835a-5c325772fe6e-images\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510846 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-etcd-client\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510889 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-audit\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510909 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a1b5d2f-fff3-4843-875e-001584dbbba1-auth-proxy-config\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510928 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-audit-dir\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.510919 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcd226c-c318-412c-9483-2f359c9a1b75-serving-cert\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511007 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-serving-cert\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511031 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f918d4c-80ee-435f-a363-f630e97965a0-serving-cert\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511055 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlm42\" (UniqueName: \"kubernetes.io/projected/8a1b5d2f-fff3-4843-875e-001584dbbba1-kube-api-access-rlm42\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511082 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-encryption-config\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511107 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5693ded5-11d5-4a9e-9c3e-737d02dcec96-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vjfr8\" (UID: \"5693ded5-11d5-4a9e-9c3e-737d02dcec96\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511130 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a91b319-14fa-4e1a-90f7-5d9ae9758ad1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wxk69\" (UID: \"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511155 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-config\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511209 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511227 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-client-ca\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511254 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-image-import-ca\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511301 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-etcd-serving-ca\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511323 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sldf\" (UniqueName: \"kubernetes.io/projected/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-kube-api-access-4sldf\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511343 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-serving-cert\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511358 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbl8\" (UniqueName: \"kubernetes.io/projected/5693ded5-11d5-4a9e-9c3e-737d02dcec96-kube-api-access-5mbl8\") pod \"openshift-config-operator-7777fb866f-vjfr8\" (UID: \"5693ded5-11d5-4a9e-9c3e-737d02dcec96\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511378 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-node-pullsecrets\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511395 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-audit-dir\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511421 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a91b319-14fa-4e1a-90f7-5d9ae9758ad1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wxk69\" (UID: \"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511640 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-config\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.511701 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-client-ca\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.512022 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.512203 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a91b319-14fa-4e1a-90f7-5d9ae9758ad1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wxk69\" (UID: \"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.512308 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a1b5d2f-fff3-4843-875e-001584dbbba1-config\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.512804 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.512992 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-client-ca\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.513222 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5693ded5-11d5-4a9e-9c3e-737d02dcec96-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vjfr8\" (UID: \"5693ded5-11d5-4a9e-9c3e-737d02dcec96\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.513552 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-config\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.513622 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f918d4c-80ee-435f-a363-f630e97965a0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.513996 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.514506 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-image-import-ca\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.514507 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f918d4c-80ee-435f-a363-f630e97965a0-service-ca-bundle\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.514589 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-node-pullsecrets\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.514627 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-audit-dir\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.514635 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.515658 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-etcd-serving-ca\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.516137 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5d19be7-214e-4428-835a-5c325772fe6e-images\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.516788 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-audit\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.516824 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2zhvr"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.516853 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5693ded5-11d5-4a9e-9c3e-737d02dcec96-serving-cert\") pod \"openshift-config-operator-7777fb866f-vjfr8\" (UID: \"5693ded5-11d5-4a9e-9c3e-737d02dcec96\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.517251 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-audit-policies\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.517364 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-etcd-client\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.517527 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-encryption-config\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.517970 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-config\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.518225 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f918d4c-80ee-435f-a363-f630e97965a0-config\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.518574 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcd226c-c318-412c-9483-2f359c9a1b75-serving-cert\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.518965 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c5d19be7-214e-4428-835a-5c325772fe6e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.518979 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.519452 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-etcd-client\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.519794 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-encryption-config\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.520078 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.520525 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-serving-cert\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.521175 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.523096 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.525316 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e308a994-46ac-4ad4-9b44-f4e0cc910c81-serving-cert\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.526811 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6v6rm"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.527122 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-serving-cert\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.527392 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a91b319-14fa-4e1a-90f7-5d9ae9758ad1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wxk69\" (UID: \"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.528220 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6v6rm"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.535122 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fm4xr"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.535210 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f918d4c-80ee-435f-a363-f630e97965a0-serving-cert\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.533864 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.537380 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8a1b5d2f-fff3-4843-875e-001584dbbba1-machine-approver-tls\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.537796 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fm4xr"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.539861 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6v6rm"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.541225 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-q62td"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.541909 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q62td"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.543169 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q62td"]
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.552780 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.573238 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.599674 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.612017 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84sck\" (UniqueName: \"kubernetes.io/projected/9c6798d5-2a74-4a6f-95dc-184b82db4a25-kube-api-access-84sck\") pod \"downloads-7954f5f757-8xlx7\" (UID: \"9c6798d5-2a74-4a6f-95dc-184b82db4a25\") " pod="openshift-console/downloads-7954f5f757-8xlx7"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.613889 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.633292 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.653288 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.673151 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.693869 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.713564 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.734473 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.753554 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.794734 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.814290 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.833882 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.853780 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.873545 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.894422 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.914326 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.933861 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.954249 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.974124 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 28 09:44:46 crc kubenswrapper[4735]: I0128 09:44:46.994219 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.014228 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 09:44:47 crc kubenswrapper[4735]:
I0128 09:44:47.034096 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.054474 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.073247 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.093290 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.113631 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.134171 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.164561 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.174270 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.193032 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.214695 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.233713 4735 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.252607 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.272935 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.292580 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.313783 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.334479 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.353045 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.374173 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.393731 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.412169 4735 request.go:700] Waited for 1.013556069s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0 Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.413511 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.433231 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.454454 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.473950 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.494213 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.513901 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.533520 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.573667 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.594313 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.613038 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.644254 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.653908 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.673631 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.693643 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.714243 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.734877 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.754452 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.773360 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.793826 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.814235 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.833987 4735 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.854287 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.873839 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.893868 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.913860 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.933643 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.953574 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.974640 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 09:44:47 crc kubenswrapper[4735]: I0128 09:44:47.994484 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.013878 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.034199 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.054223 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.072496 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.092909 4735 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.114039 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.149723 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw8qv\" (UniqueName: \"kubernetes.io/projected/8dcd226c-c318-412c-9483-2f359c9a1b75-kube-api-access-rw8qv\") pod \"controller-manager-879f6c89f-knnrt\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.168299 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-428gr\" (UniqueName: \"kubernetes.io/projected/c5d19be7-214e-4428-835a-5c325772fe6e-kube-api-access-428gr\") pod \"machine-api-operator-5694c8668f-wpjcc\" (UID: \"c5d19be7-214e-4428-835a-5c325772fe6e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.206854 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkwt5\" (UniqueName: \"kubernetes.io/projected/e308a994-46ac-4ad4-9b44-f4e0cc910c81-kube-api-access-dkwt5\") pod \"route-controller-manager-6576b87f9c-dnhz5\" (UID: 
\"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.210782 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66w8q\" (UniqueName: \"kubernetes.io/projected/7a91b319-14fa-4e1a-90f7-5d9ae9758ad1-kube-api-access-66w8q\") pod \"openshift-apiserver-operator-796bbdcf4f-wxk69\" (UID: \"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.232785 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6rg8\" (UniqueName: \"kubernetes.io/projected/fe367f89-eb40-437c-b0e0-c7d9ac7dd376-kube-api-access-k6rg8\") pod \"apiserver-7bbb656c7d-vpwj9\" (UID: \"fe367f89-eb40-437c-b0e0-c7d9ac7dd376\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.249500 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdwkj\" (UniqueName: \"kubernetes.io/projected/5f918d4c-80ee-435f-a363-f630e97965a0-kube-api-access-fdwkj\") pod \"authentication-operator-69f744f599-8rw4p\" (UID: \"5f918d4c-80ee-435f-a363-f630e97965a0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.268649 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlm42\" (UniqueName: \"kubernetes.io/projected/8a1b5d2f-fff3-4843-875e-001584dbbba1-kube-api-access-rlm42\") pod \"machine-approver-56656f9798-b9brv\" (UID: \"8a1b5d2f-fff3-4843-875e-001584dbbba1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.293392 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4sldf\" (UniqueName: \"kubernetes.io/projected/0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769-kube-api-access-4sldf\") pod \"apiserver-76f77b778f-9mmgl\" (UID: \"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769\") " pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.311184 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mbl8\" (UniqueName: \"kubernetes.io/projected/5693ded5-11d5-4a9e-9c3e-737d02dcec96-kube-api-access-5mbl8\") pod \"openshift-config-operator-7777fb866f-vjfr8\" (UID: \"5693ded5-11d5-4a9e-9c3e-737d02dcec96\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.313652 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.335636 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.352883 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.370779 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.375151 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.380703 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.394671 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.403341 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.412071 4735 request.go:700] Waited for 1.874012856s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.414551 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.415357 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.429518 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.434349 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.454054 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.454580 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.475735 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.483274 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.495350 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.506606 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.525192 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.543244 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84sck\" (UniqueName: \"kubernetes.io/projected/9c6798d5-2a74-4a6f-95dc-184b82db4a25-kube-api-access-84sck\") pod \"downloads-7954f5f757-8xlx7\" (UID: \"9c6798d5-2a74-4a6f-95dc-184b82db4a25\") " pod="openshift-console/downloads-7954f5f757-8xlx7" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.577807 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-8xlx7" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639285 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639349 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vl9j\" (UniqueName: \"kubernetes.io/projected/8e7b0527-d0de-4bc3-b59d-29b49799cad3-kube-api-access-8vl9j\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639386 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ba71cb2-abfc-47b7-9416-058006a9798a-srv-cert\") pod \"catalog-operator-68c6474976-c6gjz\" (UID: \"9ba71cb2-abfc-47b7-9416-058006a9798a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639411 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77fq\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-kube-api-access-g77fq\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639436 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-service-ca\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639470 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639490 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bf964fdc-5a3b-4f58-a73f-63d5af9f456e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lb5dv\" (UID: \"bf964fdc-5a3b-4f58-a73f-63d5af9f456e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639522 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a428da-f914-4463-8baf-899aed0920b1-secret-volume\") pod \"collect-profiles-29493210-sh45s\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639544 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a005ba34-6174-4e23-9c80-aea87194684a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-85zqb\" (UID: 
\"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639567 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/448247d5-7682-47db-a6cf-60743d53bf92-etcd-client\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639629 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/868b9120-1a06-4845-8e31-4f9bbb6d5a76-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nx4jv\" (UID: \"868b9120-1a06-4845-8e31-4f9bbb6d5a76\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639655 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4311a6d0-765a-489b-91da-cb4e23ca2617-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qdv5c\" (UID: \"4311a6d0-765a-489b-91da-cb4e23ca2617\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639676 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-serving-cert\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639701 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eaba190-637d-4895-8a03-de4b806c5062-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2cdfr\" (UID: \"4eaba190-637d-4895-8a03-de4b806c5062\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639724 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03935bf4-59fa-4f90-9307-450ea6b22983-config\") pod \"service-ca-operator-777779d784-2vkwm\" (UID: \"03935bf4-59fa-4f90-9307-450ea6b22983\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639770 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrf2\" (UniqueName: \"kubernetes.io/projected/267eadc3-080c-430c-82f2-0c64c71f1ea9-kube-api-access-pvrf2\") pod \"service-ca-9c57cc56f-cjnbj\" (UID: \"267eadc3-080c-430c-82f2-0c64c71f1ea9\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639795 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e7b0527-d0de-4bc3-b59d-29b49799cad3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639820 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb03af8f-fc04-46b0-9f6e-d4700a872288-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639845 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de09e0e4-b898-412b-a745-e53ddf521bb0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v8stz\" (UID: \"de09e0e4-b898-412b-a745-e53ddf521bb0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639869 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-oauth-config\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639893 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb03af8f-fc04-46b0-9f6e-d4700a872288-proxy-tls\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.639930 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbgzt\" (UniqueName: \"kubernetes.io/projected/bf964fdc-5a3b-4f58-a73f-63d5af9f456e-kube-api-access-qbgzt\") pod \"multus-admission-controller-857f4d67dd-lb5dv\" (UID: \"bf964fdc-5a3b-4f58-a73f-63d5af9f456e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" Jan 28 09:44:48 crc 
kubenswrapper[4735]: I0128 09:44:48.639980 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-oauth-serving-cert\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640020 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8v2t\" (UniqueName: \"kubernetes.io/projected/868b9120-1a06-4845-8e31-4f9bbb6d5a76-kube-api-access-r8v2t\") pod \"package-server-manager-789f6589d5-nx4jv\" (UID: \"868b9120-1a06-4845-8e31-4f9bbb6d5a76\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640046 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl8gp\" (UniqueName: \"kubernetes.io/projected/a093e5d2-7fd4-488a-86ef-61bf9ae18728-kube-api-access-hl8gp\") pod \"dns-operator-744455d44c-c95cr\" (UID: \"a093e5d2-7fd4-488a-86ef-61bf9ae18728\") " pod="openshift-dns-operator/dns-operator-744455d44c-c95cr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640094 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640116 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/f42071fc-81cc-463c-8648-bbf11d175271-metrics-tls\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640139 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4311a6d0-765a-489b-91da-cb4e23ca2617-proxy-tls\") pod \"machine-config-controller-84d6567774-qdv5c\" (UID: \"4311a6d0-765a-489b-91da-cb4e23ca2617\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640178 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/448247d5-7682-47db-a6cf-60743d53bf92-serving-cert\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640201 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/448247d5-7682-47db-a6cf-60743d53bf92-etcd-ca\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640223 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-serving-cert\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640247 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-registry-certificates\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640271 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmbb\" (UniqueName: \"kubernetes.io/projected/139d3ad6-aaa8-4510-b5d1-6284f8433959-kube-api-access-wtmbb\") pod \"cluster-samples-operator-665b6dd947-gkb25\" (UID: \"139d3ad6-aaa8-4510-b5d1-6284f8433959\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640310 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwjpt\" (UniqueName: \"kubernetes.io/projected/9ba71cb2-abfc-47b7-9416-058006a9798a-kube-api-access-cwjpt\") pod \"catalog-operator-68c6474976-c6gjz\" (UID: \"9ba71cb2-abfc-47b7-9416-058006a9798a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640336 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnr5t\" (UniqueName: \"kubernetes.io/projected/de09e0e4-b898-412b-a745-e53ddf521bb0-kube-api-access-bnr5t\") pod \"openshift-controller-manager-operator-756b6f6bc6-v8stz\" (UID: \"de09e0e4-b898-412b-a745-e53ddf521bb0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640360 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/18ae77ef-0b75-46bb-8eda-a99e9f12925c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dk28j\" (UID: \"18ae77ef-0b75-46bb-8eda-a99e9f12925c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640399 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a093e5d2-7fd4-488a-86ef-61bf9ae18728-metrics-tls\") pod \"dns-operator-744455d44c-c95cr\" (UID: \"a093e5d2-7fd4-488a-86ef-61bf9ae18728\") " pod="openshift-dns-operator/dns-operator-744455d44c-c95cr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640452 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a428da-f914-4463-8baf-899aed0920b1-config-volume\") pod \"collect-profiles-29493210-sh45s\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640479 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmhlj\" (UniqueName: \"kubernetes.io/projected/3c8d118a-3940-4efc-a7f3-16cb8de02990-kube-api-access-xmhlj\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640503 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e7b0527-d0de-4bc3-b59d-29b49799cad3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640532 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-registry-tls\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640548 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640567 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03935bf4-59fa-4f90-9307-450ea6b22983-serving-cert\") pod \"service-ca-operator-777779d784-2vkwm\" (UID: \"03935bf4-59fa-4f90-9307-450ea6b22983\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640583 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640600 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640618 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7df03872-d3ee-4701-832a-1aaf25d48f15-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vbfc2\" (UID: \"7df03872-d3ee-4701-832a-1aaf25d48f15\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640678 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f42071fc-81cc-463c-8648-bbf11d175271-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640721 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18ae77ef-0b75-46bb-8eda-a99e9f12925c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dk28j\" (UID: \"18ae77ef-0b75-46bb-8eda-a99e9f12925c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640785 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxf9x\" (UniqueName: 
\"kubernetes.io/projected/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-kube-api-access-fxf9x\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640807 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-config\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640832 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/448247d5-7682-47db-a6cf-60743d53bf92-config\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640850 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-trusted-ca-bundle\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.640868 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-dir\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641373 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a005ba34-6174-4e23-9c80-aea87194684a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641395 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641424 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2zb2\" (UniqueName: \"kubernetes.io/projected/f42071fc-81cc-463c-8648-bbf11d175271-kube-api-access-h2zb2\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641443 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqd5g\" (UniqueName: \"kubernetes.io/projected/11920696-6788-4b4a-9c6e-cbe4e3026be5-kube-api-access-pqd5g\") pod \"migrator-59844c95c7-pgmcm\" (UID: \"11920696-6788-4b4a-9c6e-cbe4e3026be5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641468 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n945q\" (UniqueName: 
\"kubernetes.io/projected/bfc4463e-9f59-4e22-b40d-ebf47cc84969-kube-api-access-n945q\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641707 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641776 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e7b0527-d0de-4bc3-b59d-29b49799cad3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641818 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eaba190-637d-4895-8a03-de4b806c5062-config\") pod \"kube-apiserver-operator-766d6c64bb-2cdfr\" (UID: \"4eaba190-637d-4895-8a03-de4b806c5062\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641854 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eaba190-637d-4895-8a03-de4b806c5062-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2cdfr\" (UID: \"4eaba190-637d-4895-8a03-de4b806c5062\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641881 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641908 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/139d3ad6-aaa8-4510-b5d1-6284f8433959-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gkb25\" (UID: \"139d3ad6-aaa8-4510-b5d1-6284f8433959\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641929 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7df03872-d3ee-4701-832a-1aaf25d48f15-srv-cert\") pod \"olm-operator-6b444d44fb-vbfc2\" (UID: \"7df03872-d3ee-4701-832a-1aaf25d48f15\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641960 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmvt\" (UniqueName: \"kubernetes.io/projected/03935bf4-59fa-4f90-9307-450ea6b22983-kube-api-access-brmvt\") pod \"service-ca-operator-777779d784-2vkwm\" (UID: \"03935bf4-59fa-4f90-9307-450ea6b22983\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641978 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-policies\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.641994 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-bound-sa-token\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.642270 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r45n\" (UniqueName: \"kubernetes.io/projected/4311a6d0-765a-489b-91da-cb4e23ca2617-kube-api-access-4r45n\") pod \"machine-config-controller-84d6567774-qdv5c\" (UID: \"4311a6d0-765a-489b-91da-cb4e23ca2617\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.642309 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.642330 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/448247d5-7682-47db-a6cf-60743d53bf92-etcd-service-ca\") pod 
\"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: E0128 09:44:48.642569 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:49.142539589 +0000 UTC m=+142.292203972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.642651 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de09e0e4-b898-412b-a745-e53ddf521bb0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v8stz\" (UID: \"de09e0e4-b898-412b-a745-e53ddf521bb0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.642690 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7nq2\" (UniqueName: \"kubernetes.io/projected/7df03872-d3ee-4701-832a-1aaf25d48f15-kube-api-access-s7nq2\") pod \"olm-operator-6b444d44fb-vbfc2\" (UID: \"7df03872-d3ee-4701-832a-1aaf25d48f15\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.642740 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-trusted-ca\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.642783 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/267eadc3-080c-430c-82f2-0c64c71f1ea9-signing-key\") pod \"service-ca-9c57cc56f-cjnbj\" (UID: \"267eadc3-080c-430c-82f2-0c64c71f1ea9\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.642798 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/267eadc3-080c-430c-82f2-0c64c71f1ea9-signing-cabundle\") pod \"service-ca-9c57cc56f-cjnbj\" (UID: \"267eadc3-080c-430c-82f2-0c64c71f1ea9\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.642814 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cb03af8f-fc04-46b0-9f6e-d4700a872288-images\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643372 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-trusted-ca\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " 
pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643411 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f42071fc-81cc-463c-8648-bbf11d175271-trusted-ca\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643456 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ae77ef-0b75-46bb-8eda-a99e9f12925c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dk28j\" (UID: \"18ae77ef-0b75-46bb-8eda-a99e9f12925c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643474 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643505 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjnm\" (UniqueName: \"kubernetes.io/projected/cb03af8f-fc04-46b0-9f6e-d4700a872288-kube-api-access-vpjnm\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643530 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9rqk\" (UniqueName: \"kubernetes.io/projected/e0a428da-f914-4463-8baf-899aed0920b1-kube-api-access-j9rqk\") pod \"collect-profiles-29493210-sh45s\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643556 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-config\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643575 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ba71cb2-abfc-47b7-9416-058006a9798a-profile-collector-cert\") pod \"catalog-operator-68c6474976-c6gjz\" (UID: \"9ba71cb2-abfc-47b7-9416-058006a9798a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.643733 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7gz\" (UniqueName: \"kubernetes.io/projected/448247d5-7682-47db-a6cf-60743d53bf92-kube-api-access-2f7gz\") 
pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.714853 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-knnrt"] Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.715936 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wpjcc"] Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744498 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744785 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-trusted-ca\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744816 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744834 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f42071fc-81cc-463c-8648-bbf11d175271-trusted-ca\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744858 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ae77ef-0b75-46bb-8eda-a99e9f12925c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dk28j\" (UID: \"18ae77ef-0b75-46bb-8eda-a99e9f12925c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744880 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744902 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjnm\" (UniqueName: \"kubernetes.io/projected/cb03af8f-fc04-46b0-9f6e-d4700a872288-kube-api-access-vpjnm\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744924 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9rqk\" (UniqueName: \"kubernetes.io/projected/e0a428da-f914-4463-8baf-899aed0920b1-kube-api-access-j9rqk\") pod \"collect-profiles-29493210-sh45s\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744951 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbgg\" (UniqueName: \"kubernetes.io/projected/4eae6a81-70da-45dd-88f9-bd270294b1a0-kube-api-access-8pbgg\") pod \"kube-storage-version-migrator-operator-b67b599dd-djbr5\" (UID: \"4eae6a81-70da-45dd-88f9-bd270294b1a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744971 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-config\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.744992 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ba71cb2-abfc-47b7-9416-058006a9798a-profile-collector-cert\") pod \"catalog-operator-68c6474976-c6gjz\" (UID: \"9ba71cb2-abfc-47b7-9416-058006a9798a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745039 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f7gz\" (UniqueName: \"kubernetes.io/projected/448247d5-7682-47db-a6cf-60743d53bf92-kube-api-access-2f7gz\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745058 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745078 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vl9j\" (UniqueName: \"kubernetes.io/projected/8e7b0527-d0de-4bc3-b59d-29b49799cad3-kube-api-access-8vl9j\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745101 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh7jw\" (UniqueName: \"kubernetes.io/projected/79fa6cc2-925b-4ae3-b774-5399ff66fa5b-kube-api-access-mh7jw\") pod \"machine-config-server-fm4xr\" (UID: \"79fa6cc2-925b-4ae3-b774-5399ff66fa5b\") " pod="openshift-machine-config-operator/machine-config-server-fm4xr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745121 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2mlf\" (UniqueName: \"kubernetes.io/projected/432a29bc-0436-4d07-bb40-25bb08020746-kube-api-access-g2mlf\") pod \"control-plane-machine-set-operator-78cbb6b69f-fzvkk\" (UID: \"432a29bc-0436-4d07-bb40-25bb08020746\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745142 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ba71cb2-abfc-47b7-9416-058006a9798a-srv-cert\") pod \"catalog-operator-68c6474976-c6gjz\" (UID: 
\"9ba71cb2-abfc-47b7-9416-058006a9798a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745161 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77fq\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-kube-api-access-g77fq\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745178 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-service-ca\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745214 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bf964fdc-5a3b-4f58-a73f-63d5af9f456e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lb5dv\" (UID: \"bf964fdc-5a3b-4f58-a73f-63d5af9f456e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745252 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/c2380f3e-65de-4df8-b674-a4c9b16d7218-tmpfs\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745273 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d75bce32-7014-4fc3-8721-c8a9f231455f-cert\") pod \"ingress-canary-6v6rm\" (UID: \"d75bce32-7014-4fc3-8721-c8a9f231455f\") " pod="openshift-ingress-canary/ingress-canary-6v6rm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745292 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a428da-f914-4463-8baf-899aed0920b1-secret-volume\") pod \"collect-profiles-29493210-sh45s\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745321 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38416160-6197-4351-89dd-65bc4289993b-metrics-certs\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745349 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a005ba34-6174-4e23-9c80-aea87194684a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745371 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/448247d5-7682-47db-a6cf-60743d53bf92-etcd-client\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745399 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/868b9120-1a06-4845-8e31-4f9bbb6d5a76-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nx4jv\" (UID: \"868b9120-1a06-4845-8e31-4f9bbb6d5a76\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745424 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4311a6d0-765a-489b-91da-cb4e23ca2617-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qdv5c\" (UID: \"4311a6d0-765a-489b-91da-cb4e23ca2617\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745447 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-serving-cert\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745470 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eaba190-637d-4895-8a03-de4b806c5062-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2cdfr\" (UID: \"4eaba190-637d-4895-8a03-de4b806c5062\") 
" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745486 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03935bf4-59fa-4f90-9307-450ea6b22983-config\") pod \"service-ca-operator-777779d784-2vkwm\" (UID: \"03935bf4-59fa-4f90-9307-450ea6b22983\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745504 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/432a29bc-0436-4d07-bb40-25bb08020746-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fzvkk\" (UID: \"432a29bc-0436-4d07-bb40-25bb08020746\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745521 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvrf2\" (UniqueName: \"kubernetes.io/projected/267eadc3-080c-430c-82f2-0c64c71f1ea9-kube-api-access-pvrf2\") pod \"service-ca-9c57cc56f-cjnbj\" (UID: \"267eadc3-080c-430c-82f2-0c64c71f1ea9\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745540 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e7b0527-d0de-4bc3-b59d-29b49799cad3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745555 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb03af8f-fc04-46b0-9f6e-d4700a872288-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745572 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjnbb\" (UniqueName: \"kubernetes.io/projected/af4b340f-065b-432e-a0be-d19647b019cb-kube-api-access-fjnbb\") pod \"dns-default-q62td\" (UID: \"af4b340f-065b-432e-a0be-d19647b019cb\") " pod="openshift-dns/dns-default-q62td" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745590 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de09e0e4-b898-412b-a745-e53ddf521bb0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v8stz\" (UID: \"de09e0e4-b898-412b-a745-e53ddf521bb0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745607 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-oauth-config\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745625 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb03af8f-fc04-46b0-9f6e-d4700a872288-proxy-tls\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" Jan 28 09:44:48 crc 
kubenswrapper[4735]: I0128 09:44:48.745644 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbgzt\" (UniqueName: \"kubernetes.io/projected/bf964fdc-5a3b-4f58-a73f-63d5af9f456e-kube-api-access-qbgzt\") pod \"multus-admission-controller-857f4d67dd-lb5dv\" (UID: \"bf964fdc-5a3b-4f58-a73f-63d5af9f456e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745665 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbwg\" (UniqueName: \"kubernetes.io/projected/38416160-6197-4351-89dd-65bc4289993b-kube-api-access-8gbwg\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745681 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-oauth-serving-cert\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745701 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8v2t\" (UniqueName: \"kubernetes.io/projected/868b9120-1a06-4845-8e31-4f9bbb6d5a76-kube-api-access-r8v2t\") pod \"package-server-manager-789f6589d5-nx4jv\" (UID: \"868b9120-1a06-4845-8e31-4f9bbb6d5a76\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745718 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl8gp\" (UniqueName: \"kubernetes.io/projected/a093e5d2-7fd4-488a-86ef-61bf9ae18728-kube-api-access-hl8gp\") pod 
\"dns-operator-744455d44c-c95cr\" (UID: \"a093e5d2-7fd4-488a-86ef-61bf9ae18728\") " pod="openshift-dns-operator/dns-operator-744455d44c-c95cr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745734 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/79fa6cc2-925b-4ae3-b774-5399ff66fa5b-node-bootstrap-token\") pod \"machine-config-server-fm4xr\" (UID: \"79fa6cc2-925b-4ae3-b774-5399ff66fa5b\") " pod="openshift-machine-config-operator/machine-config-server-fm4xr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745821 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745844 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f42071fc-81cc-463c-8648-bbf11d175271-metrics-tls\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745863 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4311a6d0-765a-489b-91da-cb4e23ca2617-proxy-tls\") pod \"machine-config-controller-84d6567774-qdv5c\" (UID: \"4311a6d0-765a-489b-91da-cb4e23ca2617\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745880 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/448247d5-7682-47db-a6cf-60743d53bf92-serving-cert\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745896 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/448247d5-7682-47db-a6cf-60743d53bf92-etcd-ca\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745918 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-serving-cert\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745944 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-registry-certificates\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.745965 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtmbb\" (UniqueName: \"kubernetes.io/projected/139d3ad6-aaa8-4510-b5d1-6284f8433959-kube-api-access-wtmbb\") pod \"cluster-samples-operator-665b6dd947-gkb25\" (UID: \"139d3ad6-aaa8-4510-b5d1-6284f8433959\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 
09:44:48.745987 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwjpt\" (UniqueName: \"kubernetes.io/projected/9ba71cb2-abfc-47b7-9416-058006a9798a-kube-api-access-cwjpt\") pod \"catalog-operator-68c6474976-c6gjz\" (UID: \"9ba71cb2-abfc-47b7-9416-058006a9798a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746004 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnr5t\" (UniqueName: \"kubernetes.io/projected/de09e0e4-b898-412b-a745-e53ddf521bb0-kube-api-access-bnr5t\") pod \"openshift-controller-manager-operator-756b6f6bc6-v8stz\" (UID: \"de09e0e4-b898-412b-a745-e53ddf521bb0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746019 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ae77ef-0b75-46bb-8eda-a99e9f12925c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dk28j\" (UID: \"18ae77ef-0b75-46bb-8eda-a99e9f12925c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746045 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a093e5d2-7fd4-488a-86ef-61bf9ae18728-metrics-tls\") pod \"dns-operator-744455d44c-c95cr\" (UID: \"a093e5d2-7fd4-488a-86ef-61bf9ae18728\") " pod="openshift-dns-operator/dns-operator-744455d44c-c95cr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746062 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eae6a81-70da-45dd-88f9-bd270294b1a0-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-djbr5\" (UID: \"4eae6a81-70da-45dd-88f9-bd270294b1a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746080 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a428da-f914-4463-8baf-899aed0920b1-config-volume\") pod \"collect-profiles-29493210-sh45s\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746096 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tkljg\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746112 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tkljg\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746129 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmhlj\" (UniqueName: \"kubernetes.io/projected/3c8d118a-3940-4efc-a7f3-16cb8de02990-kube-api-access-xmhlj\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc 
kubenswrapper[4735]: I0128 09:44:48.746150 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e7b0527-d0de-4bc3-b59d-29b49799cad3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746173 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-plugins-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746193 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff879b4-32b1-430b-bf3c-9f020c76ba0f-config\") pod \"kube-controller-manager-operator-78b949d7b-vsxgl\" (UID: \"5ff879b4-32b1-430b-bf3c-9f020c76ba0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746211 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-registry-tls\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746229 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746247 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4btx\" (UniqueName: \"kubernetes.io/projected/b4340feb-9a36-45f1-8a67-8df7b8040cf0-kube-api-access-z4btx\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746265 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03935bf4-59fa-4f90-9307-450ea6b22983-serving-cert\") pod \"service-ca-operator-777779d784-2vkwm\" (UID: \"03935bf4-59fa-4f90-9307-450ea6b22983\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746286 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746304 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54qw2\" (UniqueName: \"kubernetes.io/projected/c2380f3e-65de-4df8-b674-a4c9b16d7218-kube-api-access-54qw2\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 
09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746323 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38416160-6197-4351-89dd-65bc4289993b-service-ca-bundle\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746340 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af4b340f-065b-432e-a0be-d19647b019cb-config-volume\") pod \"dns-default-q62td\" (UID: \"af4b340f-065b-432e-a0be-d19647b019cb\") " pod="openshift-dns/dns-default-q62td" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746364 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746393 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7df03872-d3ee-4701-832a-1aaf25d48f15-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vbfc2\" (UID: \"7df03872-d3ee-4701-832a-1aaf25d48f15\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746415 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f42071fc-81cc-463c-8648-bbf11d175271-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7vjbc\" 
(UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746431 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18ae77ef-0b75-46bb-8eda-a99e9f12925c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dk28j\" (UID: \"18ae77ef-0b75-46bb-8eda-a99e9f12925c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746450 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxf9x\" (UniqueName: \"kubernetes.io/projected/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-kube-api-access-fxf9x\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746467 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-config\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746485 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/79fa6cc2-925b-4ae3-b774-5399ff66fa5b-certs\") pod \"machine-config-server-fm4xr\" (UID: \"79fa6cc2-925b-4ae3-b774-5399ff66fa5b\") " pod="openshift-machine-config-operator/machine-config-server-fm4xr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746487 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-config\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746506 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/448247d5-7682-47db-a6cf-60743d53bf92-config\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746582 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-trusted-ca-bundle\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746604 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-dir\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746639 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ff879b4-32b1-430b-bf3c-9f020c76ba0f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vsxgl\" (UID: \"5ff879b4-32b1-430b-bf3c-9f020c76ba0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746663 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a005ba34-6174-4e23-9c80-aea87194684a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746686 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746708 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eae6a81-70da-45dd-88f9-bd270294b1a0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-djbr5\" (UID: \"4eae6a81-70da-45dd-88f9-bd270294b1a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746735 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2zb2\" (UniqueName: \"kubernetes.io/projected/f42071fc-81cc-463c-8648-bbf11d175271-kube-api-access-h2zb2\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746797 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqd5g\" (UniqueName: \"kubernetes.io/projected/11920696-6788-4b4a-9c6e-cbe4e3026be5-kube-api-access-pqd5g\") pod \"migrator-59844c95c7-pgmcm\" (UID: \"11920696-6788-4b4a-9c6e-cbe4e3026be5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746818 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-csi-data-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746863 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n945q\" (UniqueName: \"kubernetes.io/projected/bfc4463e-9f59-4e22-b40d-ebf47cc84969-kube-api-access-n945q\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746885 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2380f3e-65de-4df8-b674-a4c9b16d7218-apiservice-cert\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746947 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e7b0527-d0de-4bc3-b59d-29b49799cad3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746971 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eaba190-637d-4895-8a03-de4b806c5062-config\") pod \"kube-apiserver-operator-766d6c64bb-2cdfr\" (UID: \"4eaba190-637d-4895-8a03-de4b806c5062\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.746992 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eaba190-637d-4895-8a03-de4b806c5062-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2cdfr\" (UID: \"4eaba190-637d-4895-8a03-de4b806c5062\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747020 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747039 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/38416160-6197-4351-89dd-65bc4289993b-default-certificate\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747060 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/139d3ad6-aaa8-4510-b5d1-6284f8433959-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gkb25\" (UID: \"139d3ad6-aaa8-4510-b5d1-6284f8433959\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747082 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7df03872-d3ee-4701-832a-1aaf25d48f15-srv-cert\") pod \"olm-operator-6b444d44fb-vbfc2\" (UID: \"7df03872-d3ee-4701-832a-1aaf25d48f15\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747101 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff879b4-32b1-430b-bf3c-9f020c76ba0f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vsxgl\" (UID: \"5ff879b4-32b1-430b-bf3c-9f020c76ba0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747126 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brmvt\" (UniqueName: \"kubernetes.io/projected/03935bf4-59fa-4f90-9307-450ea6b22983-kube-api-access-brmvt\") pod \"service-ca-operator-777779d784-2vkwm\" (UID: \"03935bf4-59fa-4f90-9307-450ea6b22983\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747131 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/448247d5-7682-47db-a6cf-60743d53bf92-config\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747155 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-policies\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747201 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-bound-sa-token\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747238 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r45n\" (UniqueName: \"kubernetes.io/projected/4311a6d0-765a-489b-91da-cb4e23ca2617-kube-api-access-4r45n\") pod \"machine-config-controller-84d6567774-qdv5c\" (UID: \"4311a6d0-765a-489b-91da-cb4e23ca2617\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747265 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-mountpoint-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747290 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljl2\" (UniqueName: \"kubernetes.io/projected/d75bce32-7014-4fc3-8721-c8a9f231455f-kube-api-access-dljl2\") pod \"ingress-canary-6v6rm\" (UID: \"d75bce32-7014-4fc3-8721-c8a9f231455f\") " pod="openshift-ingress-canary/ingress-canary-6v6rm"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747334 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747362 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/448247d5-7682-47db-a6cf-60743d53bf92-etcd-service-ca\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747386 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2380f3e-65de-4df8-b674-a4c9b16d7218-webhook-cert\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747409 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzqgh\" (UniqueName: \"kubernetes.io/projected/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-kube-api-access-fzqgh\") pod \"marketplace-operator-79b997595-tkljg\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-tkljg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747438 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de09e0e4-b898-412b-a745-e53ddf521bb0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v8stz\" (UID: \"de09e0e4-b898-412b-a745-e53ddf521bb0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747464 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7nq2\" (UniqueName: \"kubernetes.io/projected/7df03872-d3ee-4701-832a-1aaf25d48f15-kube-api-access-s7nq2\") pod \"olm-operator-6b444d44fb-vbfc2\" (UID: \"7df03872-d3ee-4701-832a-1aaf25d48f15\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747486 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/38416160-6197-4351-89dd-65bc4289993b-stats-auth\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747509 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-trusted-ca\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747532 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/267eadc3-080c-430c-82f2-0c64c71f1ea9-signing-key\") pod \"service-ca-9c57cc56f-cjnbj\" (UID: \"267eadc3-080c-430c-82f2-0c64c71f1ea9\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747552 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/267eadc3-080c-430c-82f2-0c64c71f1ea9-signing-cabundle\") pod \"service-ca-9c57cc56f-cjnbj\" (UID: \"267eadc3-080c-430c-82f2-0c64c71f1ea9\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747575 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cb03af8f-fc04-46b0-9f6e-d4700a872288-images\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747599 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-socket-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747622 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-registration-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747643 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af4b340f-065b-432e-a0be-d19647b019cb-metrics-tls\") pod \"dns-default-q62td\" (UID: \"af4b340f-065b-432e-a0be-d19647b019cb\") " pod="openshift-dns/dns-default-q62td"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.747655 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-policies\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: E0128 09:44:48.749470 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:49.249448735 +0000 UTC m=+142.399113268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.751665 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ae77ef-0b75-46bb-8eda-a99e9f12925c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dk28j\" (UID: \"18ae77ef-0b75-46bb-8eda-a99e9f12925c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.753995 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.754388 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f42071fc-81cc-463c-8648-bbf11d175271-trusted-ca\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.755328 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ba71cb2-abfc-47b7-9416-058006a9798a-profile-collector-cert\") pod \"catalog-operator-68c6474976-c6gjz\" (UID: \"9ba71cb2-abfc-47b7-9416-058006a9798a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.755396 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-dir\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.756439 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a428da-f914-4463-8baf-899aed0920b1-config-volume\") pod \"collect-profiles-29493210-sh45s\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.756673 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-trusted-ca-bundle\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.756729 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.758219 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-config\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.758399 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a005ba34-6174-4e23-9c80-aea87194684a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.758411 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.759859 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de09e0e4-b898-412b-a745-e53ddf521bb0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v8stz\" (UID: \"de09e0e4-b898-412b-a745-e53ddf521bb0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.761929 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ae77ef-0b75-46bb-8eda-a99e9f12925c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dk28j\" (UID: \"18ae77ef-0b75-46bb-8eda-a99e9f12925c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.762249 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-trusted-ca\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.763407 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.783059 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-trusted-ca\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.783350 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-serving-cert\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.787063 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03935bf4-59fa-4f90-9307-450ea6b22983-config\") pod \"service-ca-operator-777779d784-2vkwm\" (UID: \"03935bf4-59fa-4f90-9307-450ea6b22983\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.788442 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03935bf4-59fa-4f90-9307-450ea6b22983-serving-cert\") pod \"service-ca-operator-777779d784-2vkwm\" (UID: \"03935bf4-59fa-4f90-9307-450ea6b22983\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.789913 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-oauth-serving-cert\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.790552 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-oauth-config\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.795009 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-service-ca\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.796799 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-registry-certificates\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.797995 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a005ba34-6174-4e23-9c80-aea87194684a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.800409 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/139d3ad6-aaa8-4510-b5d1-6284f8433959-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gkb25\" (UID: \"139d3ad6-aaa8-4510-b5d1-6284f8433959\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.801109 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eaba190-637d-4895-8a03-de4b806c5062-config\") pod \"kube-apiserver-operator-766d6c64bb-2cdfr\" (UID: \"4eaba190-637d-4895-8a03-de4b806c5062\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.804256 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb03af8f-fc04-46b0-9f6e-d4700a872288-proxy-tls\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.805620 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb03af8f-fc04-46b0-9f6e-d4700a872288-auth-proxy-config\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.805845 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a093e5d2-7fd4-488a-86ef-61bf9ae18728-metrics-tls\") pod \"dns-operator-744455d44c-c95cr\" (UID: \"a093e5d2-7fd4-488a-86ef-61bf9ae18728\") " pod="openshift-dns-operator/dns-operator-744455d44c-c95cr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.805944 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/267eadc3-080c-430c-82f2-0c64c71f1ea9-signing-key\") pod \"service-ca-9c57cc56f-cjnbj\" (UID: \"267eadc3-080c-430c-82f2-0c64c71f1ea9\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.806070 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eaba190-637d-4895-8a03-de4b806c5062-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2cdfr\" (UID: \"4eaba190-637d-4895-8a03-de4b806c5062\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.806325 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cb03af8f-fc04-46b0-9f6e-d4700a872288-images\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.806856 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f42071fc-81cc-463c-8648-bbf11d175271-metrics-tls\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.807185 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-registry-tls\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.807303 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/267eadc3-080c-430c-82f2-0c64c71f1ea9-signing-cabundle\") pod \"service-ca-9c57cc56f-cjnbj\" (UID: \"267eadc3-080c-430c-82f2-0c64c71f1ea9\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.807739 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7df03872-d3ee-4701-832a-1aaf25d48f15-srv-cert\") pod \"olm-operator-6b444d44fb-vbfc2\" (UID: \"7df03872-d3ee-4701-832a-1aaf25d48f15\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.810703 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.811840 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.812241 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.812876 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ba71cb2-abfc-47b7-9416-058006a9798a-srv-cert\") pod \"catalog-operator-68c6474976-c6gjz\" (UID: \"9ba71cb2-abfc-47b7-9416-058006a9798a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.813321 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de09e0e4-b898-412b-a745-e53ddf521bb0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v8stz\" (UID: \"de09e0e4-b898-412b-a745-e53ddf521bb0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.813397 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.815038 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4311a6d0-765a-489b-91da-cb4e23ca2617-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qdv5c\" (UID: \"4311a6d0-765a-489b-91da-cb4e23ca2617\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.815609 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/448247d5-7682-47db-a6cf-60743d53bf92-serving-cert\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.815722 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/448247d5-7682-47db-a6cf-60743d53bf92-etcd-ca\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.816627 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/868b9120-1a06-4845-8e31-4f9bbb6d5a76-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nx4jv\" (UID: \"868b9120-1a06-4845-8e31-4f9bbb6d5a76\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.817452 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a428da-f914-4463-8baf-899aed0920b1-secret-volume\") pod \"collect-profiles-29493210-sh45s\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.820209 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnr5t\" (UniqueName: \"kubernetes.io/projected/de09e0e4-b898-412b-a745-e53ddf521bb0-kube-api-access-bnr5t\") pod \"openshift-controller-manager-operator-756b6f6bc6-v8stz\" (UID: \"de09e0e4-b898-412b-a745-e53ddf521bb0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.823025 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/448247d5-7682-47db-a6cf-60743d53bf92-etcd-client\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.823374 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/448247d5-7682-47db-a6cf-60743d53bf92-etcd-service-ca\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128
09:44:48.824734 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e7b0527-d0de-4bc3-b59d-29b49799cad3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.825791 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bf964fdc-5a3b-4f58-a73f-63d5af9f456e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lb5dv\" (UID: \"bf964fdc-5a3b-4f58-a73f-63d5af9f456e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.825859 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69"] Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.826633 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"] Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.826871 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e7b0527-d0de-4bc3-b59d-29b49799cad3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.828277 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: 
\"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.828536 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwjpt\" (UniqueName: \"kubernetes.io/projected/9ba71cb2-abfc-47b7-9416-058006a9798a-kube-api-access-cwjpt\") pod \"catalog-operator-68c6474976-c6gjz\" (UID: \"9ba71cb2-abfc-47b7-9416-058006a9798a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.831615 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.833408 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.833472 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f7gz\" (UniqueName: \"kubernetes.io/projected/448247d5-7682-47db-a6cf-60743d53bf92-kube-api-access-2f7gz\") pod \"etcd-operator-b45778765-4qnqk\" (UID: \"448247d5-7682-47db-a6cf-60743d53bf92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.834018 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/4311a6d0-765a-489b-91da-cb4e23ca2617-proxy-tls\") pod \"machine-config-controller-84d6567774-qdv5c\" (UID: \"4311a6d0-765a-489b-91da-cb4e23ca2617\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.834442 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7df03872-d3ee-4701-832a-1aaf25d48f15-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vbfc2\" (UID: \"7df03872-d3ee-4701-832a-1aaf25d48f15\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.836154 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-serving-cert\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852113 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh7jw\" (UniqueName: \"kubernetes.io/projected/79fa6cc2-925b-4ae3-b774-5399ff66fa5b-kube-api-access-mh7jw\") pod \"machine-config-server-fm4xr\" (UID: \"79fa6cc2-925b-4ae3-b774-5399ff66fa5b\") " pod="openshift-machine-config-operator/machine-config-server-fm4xr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852160 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2mlf\" (UniqueName: \"kubernetes.io/projected/432a29bc-0436-4d07-bb40-25bb08020746-kube-api-access-g2mlf\") pod \"control-plane-machine-set-operator-78cbb6b69f-fzvkk\" (UID: \"432a29bc-0436-4d07-bb40-25bb08020746\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk" Jan 28 09:44:48 crc 
kubenswrapper[4735]: I0128 09:44:48.852192 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c2380f3e-65de-4df8-b674-a4c9b16d7218-tmpfs\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852212 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d75bce32-7014-4fc3-8721-c8a9f231455f-cert\") pod \"ingress-canary-6v6rm\" (UID: \"d75bce32-7014-4fc3-8721-c8a9f231455f\") " pod="openshift-ingress-canary/ingress-canary-6v6rm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852232 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38416160-6197-4351-89dd-65bc4289993b-metrics-certs\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852270 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/432a29bc-0436-4d07-bb40-25bb08020746-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fzvkk\" (UID: \"432a29bc-0436-4d07-bb40-25bb08020746\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852303 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjnbb\" (UniqueName: \"kubernetes.io/projected/af4b340f-065b-432e-a0be-d19647b019cb-kube-api-access-fjnbb\") pod \"dns-default-q62td\" (UID: \"af4b340f-065b-432e-a0be-d19647b019cb\") " 
pod="openshift-dns/dns-default-q62td" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852329 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbwg\" (UniqueName: \"kubernetes.io/projected/38416160-6197-4351-89dd-65bc4289993b-kube-api-access-8gbwg\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852368 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/79fa6cc2-925b-4ae3-b774-5399ff66fa5b-node-bootstrap-token\") pod \"machine-config-server-fm4xr\" (UID: \"79fa6cc2-925b-4ae3-b774-5399ff66fa5b\") " pod="openshift-machine-config-operator/machine-config-server-fm4xr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852425 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eae6a81-70da-45dd-88f9-bd270294b1a0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-djbr5\" (UID: \"4eae6a81-70da-45dd-88f9-bd270294b1a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852449 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tkljg\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852472 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tkljg\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852501 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-plugins-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852517 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff879b4-32b1-430b-bf3c-9f020c76ba0f-config\") pod \"kube-controller-manager-operator-78b949d7b-vsxgl\" (UID: \"5ff879b4-32b1-430b-bf3c-9f020c76ba0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852538 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4btx\" (UniqueName: \"kubernetes.io/projected/b4340feb-9a36-45f1-8a67-8df7b8040cf0-kube-api-access-z4btx\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852557 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54qw2\" (UniqueName: \"kubernetes.io/projected/c2380f3e-65de-4df8-b674-a4c9b16d7218-kube-api-access-54qw2\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852573 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38416160-6197-4351-89dd-65bc4289993b-service-ca-bundle\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852589 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af4b340f-065b-432e-a0be-d19647b019cb-config-volume\") pod \"dns-default-q62td\" (UID: \"af4b340f-065b-432e-a0be-d19647b019cb\") " pod="openshift-dns/dns-default-q62td" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852628 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/79fa6cc2-925b-4ae3-b774-5399ff66fa5b-certs\") pod \"machine-config-server-fm4xr\" (UID: \"79fa6cc2-925b-4ae3-b774-5399ff66fa5b\") " pod="openshift-machine-config-operator/machine-config-server-fm4xr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852649 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eae6a81-70da-45dd-88f9-bd270294b1a0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-djbr5\" (UID: \"4eae6a81-70da-45dd-88f9-bd270294b1a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852667 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ff879b4-32b1-430b-bf3c-9f020c76ba0f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vsxgl\" (UID: \"5ff879b4-32b1-430b-bf3c-9f020c76ba0f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852704 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-csi-data-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852735 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2380f3e-65de-4df8-b674-a4c9b16d7218-apiservice-cert\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852824 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852849 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/38416160-6197-4351-89dd-65bc4289993b-default-certificate\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852866 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5ff879b4-32b1-430b-bf3c-9f020c76ba0f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vsxgl\" (UID: \"5ff879b4-32b1-430b-bf3c-9f020c76ba0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852901 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-mountpoint-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852916 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dljl2\" (UniqueName: \"kubernetes.io/projected/d75bce32-7014-4fc3-8721-c8a9f231455f-kube-api-access-dljl2\") pod \"ingress-canary-6v6rm\" (UID: \"d75bce32-7014-4fc3-8721-c8a9f231455f\") " pod="openshift-ingress-canary/ingress-canary-6v6rm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852933 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2380f3e-65de-4df8-b674-a4c9b16d7218-webhook-cert\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852948 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzqgh\" (UniqueName: \"kubernetes.io/projected/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-kube-api-access-fzqgh\") pod \"marketplace-operator-79b997595-tkljg\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852976 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/38416160-6197-4351-89dd-65bc4289993b-stats-auth\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.852991 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-socket-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.853008 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-registration-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.853030 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af4b340f-065b-432e-a0be-d19647b019cb-metrics-tls\") pod \"dns-default-q62td\" (UID: \"af4b340f-065b-432e-a0be-d19647b019cb\") " pod="openshift-dns/dns-default-q62td" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.853065 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbgg\" (UniqueName: \"kubernetes.io/projected/4eae6a81-70da-45dd-88f9-bd270294b1a0-kube-api-access-8pbgg\") pod \"kube-storage-version-migrator-operator-b67b599dd-djbr5\" (UID: \"4eae6a81-70da-45dd-88f9-bd270294b1a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" Jan 28 09:44:48 crc 
kubenswrapper[4735]: I0128 09:44:48.853401 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-csi-data-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.857880 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-plugins-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.858040 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" Jan 28 09:44:48 crc kubenswrapper[4735]: E0128 09:44:48.858433 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:49.358412913 +0000 UTC m=+142.508077276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.860456 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-socket-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.860535 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-registration-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.862679 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b4340feb-9a36-45f1-8a67-8df7b8040cf0-mountpoint-dir\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.863143 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c2380f3e-65de-4df8-b674-a4c9b16d7218-tmpfs\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 
28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.865666 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eae6a81-70da-45dd-88f9-bd270294b1a0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-djbr5\" (UID: \"4eae6a81-70da-45dd-88f9-bd270294b1a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.865878 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vl9j\" (UniqueName: \"kubernetes.io/projected/8e7b0527-d0de-4bc3-b59d-29b49799cad3-kube-api-access-8vl9j\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.867123 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/432a29bc-0436-4d07-bb40-25bb08020746-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fzvkk\" (UID: \"432a29bc-0436-4d07-bb40-25bb08020746\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.869928 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/79fa6cc2-925b-4ae3-b774-5399ff66fa5b-node-bootstrap-token\") pod \"machine-config-server-fm4xr\" (UID: \"79fa6cc2-925b-4ae3-b774-5399ff66fa5b\") " pod="openshift-machine-config-operator/machine-config-server-fm4xr" Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.871692 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4eae6a81-70da-45dd-88f9-bd270294b1a0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-djbr5\" (UID: \"4eae6a81-70da-45dd-88f9-bd270294b1a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.873968 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2380f3e-65de-4df8-b674-a4c9b16d7218-apiservice-cert\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.877804 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tkljg\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-tkljg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.879386 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.882476 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2380f3e-65de-4df8-b674-a4c9b16d7218-webhook-cert\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.891554 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tkljg\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-tkljg"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.891876 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af4b340f-065b-432e-a0be-d19647b019cb-config-volume\") pod \"dns-default-q62td\" (UID: \"af4b340f-065b-432e-a0be-d19647b019cb\") " pod="openshift-dns/dns-default-q62td"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.892603 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqd5g\" (UniqueName: \"kubernetes.io/projected/11920696-6788-4b4a-9c6e-cbe4e3026be5-kube-api-access-pqd5g\") pod \"migrator-59844c95c7-pgmcm\" (UID: \"11920696-6788-4b4a-9c6e-cbe4e3026be5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.894531 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/79fa6cc2-925b-4ae3-b774-5399ff66fa5b-certs\") pod \"machine-config-server-fm4xr\" (UID: \"79fa6cc2-925b-4ae3-b774-5399ff66fa5b\") " pod="openshift-machine-config-operator/machine-config-server-fm4xr"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.895060 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d75bce32-7014-4fc3-8721-c8a9f231455f-cert\") pod \"ingress-canary-6v6rm\" (UID: \"d75bce32-7014-4fc3-8721-c8a9f231455f\") " pod="openshift-ingress-canary/ingress-canary-6v6rm"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.898630 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9"]
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.900364 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxf9x\" (UniqueName: \"kubernetes.io/projected/c9efa93c-e7ba-418e-b5f0-bd063e650ee8-kube-api-access-fxf9x\") pod \"console-operator-58897d9998-s5d86\" (UID: \"c9efa93c-e7ba-418e-b5f0-bd063e650ee8\") " pod="openshift-console-operator/console-operator-58897d9998-s5d86"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.900868 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ff879b4-32b1-430b-bf3c-9f020c76ba0f-config\") pod \"kube-controller-manager-operator-78b949d7b-vsxgl\" (UID: \"5ff879b4-32b1-430b-bf3c-9f020c76ba0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.903131 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af4b340f-065b-432e-a0be-d19647b019cb-metrics-tls\") pod \"dns-default-q62td\" (UID: \"af4b340f-065b-432e-a0be-d19647b019cb\") " pod="openshift-dns/dns-default-q62td"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.904002 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/38416160-6197-4351-89dd-65bc4289993b-default-certificate\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.904234 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38416160-6197-4351-89dd-65bc4289993b-metrics-certs\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.904474 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38416160-6197-4351-89dd-65bc4289993b-service-ca-bundle\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.906982 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff879b4-32b1-430b-bf3c-9f020c76ba0f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vsxgl\" (UID: \"5ff879b4-32b1-430b-bf3c-9f020c76ba0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.912570 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/38416160-6197-4351-89dd-65bc4289993b-stats-auth\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.921788 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjnm\" (UniqueName: \"kubernetes.io/projected/cb03af8f-fc04-46b0-9f6e-d4700a872288-kube-api-access-vpjnm\") pod \"machine-config-operator-74547568cd-7vrhj\" (UID: \"cb03af8f-fc04-46b0-9f6e-d4700a872288\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj"
Jan 28 09:44:48 crc kubenswrapper[4735]: W0128 09:44:48.925583 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe367f89_eb40_437c_b0e0_c7d9ac7dd376.slice/crio-320647c3797b16246fdb632c2120f07e7ec25d2542c001ebe72c70b190dfa29b WatchSource:0}: Error finding container 320647c3797b16246fdb632c2120f07e7ec25d2542c001ebe72c70b190dfa29b: Status 404 returned error can't find the container with id 320647c3797b16246fdb632c2120f07e7ec25d2542c001ebe72c70b190dfa29b
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.933217 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9rqk\" (UniqueName: \"kubernetes.io/projected/e0a428da-f914-4463-8baf-899aed0920b1-kube-api-access-j9rqk\") pod \"collect-profiles-29493210-sh45s\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.942844 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8rw4p"]
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.957067 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 09:44:48 crc kubenswrapper[4735]: E0128 09:44:48.958696 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:49.458666865 +0000 UTC m=+142.608331248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.964523 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmhlj\" (UniqueName: \"kubernetes.io/projected/3c8d118a-3940-4efc-a7f3-16cb8de02990-kube-api-access-xmhlj\") pod \"oauth-openshift-558db77b4-fhwqk\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.972292 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.978311 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-bound-sa-token\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:48 crc kubenswrapper[4735]: I0128 09:44:48.994041 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9mmgl"]
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.005701 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.007589 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r45n\" (UniqueName: \"kubernetes.io/projected/4311a6d0-765a-489b-91da-cb4e23ca2617-kube-api-access-4r45n\") pod \"machine-config-controller-84d6567774-qdv5c\" (UID: \"4311a6d0-765a-489b-91da-cb4e23ca2617\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c"
Jan 28 09:44:49 crc kubenswrapper[4735]: W0128 09:44:49.012211 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5842b2_66a0_4bd1_a7b0_b1fdf27b1769.slice/crio-191c8525c5accf7b9b0b09028edd5a698644226b65b1ba046da9d58e870ffc08 WatchSource:0}: Error finding container 191c8525c5accf7b9b0b09028edd5a698644226b65b1ba046da9d58e870ffc08: Status 404 returned error can't find the container with id 191c8525c5accf7b9b0b09028edd5a698644226b65b1ba046da9d58e870ffc08
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.015483 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl8gp\" (UniqueName: \"kubernetes.io/projected/a093e5d2-7fd4-488a-86ef-61bf9ae18728-kube-api-access-hl8gp\") pod \"dns-operator-744455d44c-c95cr\" (UID: \"a093e5d2-7fd4-488a-86ef-61bf9ae18728\") " pod="openshift-dns-operator/dns-operator-744455d44c-c95cr"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.030012 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18ae77ef-0b75-46bb-8eda-a99e9f12925c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dk28j\" (UID: \"18ae77ef-0b75-46bb-8eda-a99e9f12925c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.037397 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.048566 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.054194 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2zb2\" (UniqueName: \"kubernetes.io/projected/f42071fc-81cc-463c-8648-bbf11d175271-kube-api-access-h2zb2\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.059891 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:49 crc kubenswrapper[4735]: E0128 09:44:49.060671 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:49.560642302 +0000 UTC m=+142.710306845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.076724 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7nq2\" (UniqueName: \"kubernetes.io/projected/7df03872-d3ee-4701-832a-1aaf25d48f15-kube-api-access-s7nq2\") pod \"olm-operator-6b444d44fb-vbfc2\" (UID: \"7df03872-d3ee-4701-832a-1aaf25d48f15\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.077569 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8"]
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.095175 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbgzt\" (UniqueName: \"kubernetes.io/projected/bf964fdc-5a3b-4f58-a73f-63d5af9f456e-kube-api-access-qbgzt\") pod \"multus-admission-controller-857f4d67dd-lb5dv\" (UID: \"bf964fdc-5a3b-4f58-a73f-63d5af9f456e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv"
Jan 28 09:44:49 crc kubenswrapper[4735]: W0128 09:44:49.116520 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5693ded5_11d5_4a9e_9c3e_737d02dcec96.slice/crio-dbcecc159a9e65ec710ca1728f14c1a37f7fa26138bced2bee2dad2708d8b2e9 WatchSource:0}: Error finding container dbcecc159a9e65ec710ca1728f14c1a37f7fa26138bced2bee2dad2708d8b2e9: Status 404 returned error can't find the container with id dbcecc159a9e65ec710ca1728f14c1a37f7fa26138bced2bee2dad2708d8b2e9
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.118068 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.124928 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n945q\" (UniqueName: \"kubernetes.io/projected/bfc4463e-9f59-4e22-b40d-ebf47cc84969-kube-api-access-n945q\") pod \"console-f9d7485db-6dsdg\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.147683 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s5d86"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.147699 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e7b0527-d0de-4bc3-b59d-29b49799cad3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x6nbm\" (UID: \"8e7b0527-d0de-4bc3-b59d-29b49799cad3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.151389 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8v2t\" (UniqueName: \"kubernetes.io/projected/868b9120-1a06-4845-8e31-4f9bbb6d5a76-kube-api-access-r8v2t\") pod \"package-server-manager-789f6589d5-nx4jv\" (UID: \"868b9120-1a06-4845-8e31-4f9bbb6d5a76\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.167586 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.169123 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f42071fc-81cc-463c-8648-bbf11d175271-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7vjbc\" (UID: \"f42071fc-81cc-463c-8648-bbf11d175271\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"
Jan 28 09:44:49 crc kubenswrapper[4735]: E0128 09:44:49.176030 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:49.675974826 +0000 UTC m=+142.825639189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.192516 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8xlx7"]
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.195155 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.197641 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6dsdg"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.201078 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtmbb\" (UniqueName: \"kubernetes.io/projected/139d3ad6-aaa8-4510-b5d1-6284f8433959-kube-api-access-wtmbb\") pod \"cluster-samples-operator-665b6dd947-gkb25\" (UID: \"139d3ad6-aaa8-4510-b5d1-6284f8433959\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.205631 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.220744 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmvt\" (UniqueName: \"kubernetes.io/projected/03935bf4-59fa-4f90-9307-450ea6b22983-kube-api-access-brmvt\") pod \"service-ca-operator-777779d784-2vkwm\" (UID: \"03935bf4-59fa-4f90-9307-450ea6b22983\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.230111 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvrf2\" (UniqueName: \"kubernetes.io/projected/267eadc3-080c-430c-82f2-0c64c71f1ea9-kube-api-access-pvrf2\") pod \"service-ca-9c57cc56f-cjnbj\" (UID: \"267eadc3-080c-430c-82f2-0c64c71f1ea9\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.252037 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.255771 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77fq\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-kube-api-access-g77fq\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.258364 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c95cr"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.265487 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.277563 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.280918 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.283111 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz"]
Jan 28 09:44:49 crc kubenswrapper[4735]: E0128 09:44:49.284118 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:49.784096563 +0000 UTC m=+142.933760936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.290286 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eaba190-637d-4895-8a03-de4b806c5062-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2cdfr\" (UID: \"4eaba190-637d-4895-8a03-de4b806c5062\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.293962 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.314112 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.314950 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh7jw\" (UniqueName: \"kubernetes.io/projected/79fa6cc2-925b-4ae3-b774-5399ff66fa5b-kube-api-access-mh7jw\") pod \"machine-config-server-fm4xr\" (UID: \"79fa6cc2-925b-4ae3-b774-5399ff66fa5b\") " pod="openshift-machine-config-operator/machine-config-server-fm4xr"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.322733 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.327976 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.347766 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" event={"ID":"fe367f89-eb40-437c-b0e0-c7d9ac7dd376","Type":"ContainerStarted","Data":"320647c3797b16246fdb632c2120f07e7ec25d2542c001ebe72c70b190dfa29b"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.357306 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbgg\" (UniqueName: \"kubernetes.io/projected/4eae6a81-70da-45dd-88f9-bd270294b1a0-kube-api-access-8pbgg\") pod \"kube-storage-version-migrator-operator-b67b599dd-djbr5\" (UID: \"4eae6a81-70da-45dd-88f9-bd270294b1a0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.357925 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2mlf\" (UniqueName: \"kubernetes.io/projected/432a29bc-0436-4d07-bb40-25bb08020746-kube-api-access-g2mlf\") pod \"control-plane-machine-set-operator-78cbb6b69f-fzvkk\" (UID: \"432a29bc-0436-4d07-bb40-25bb08020746\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.358782 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69" event={"ID":"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1","Type":"ContainerStarted","Data":"f24c6611001b2db87b574250b39a93f6267ba247c414594943006405c0600f8f"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.358847 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69" event={"ID":"7a91b319-14fa-4e1a-90f7-5d9ae9758ad1","Type":"ContainerStarted","Data":"7edb5093c92f45276fb41f637aa78364979ce678fb0541cd0358784920ca77c2"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.369713 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" event={"ID":"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769","Type":"ContainerStarted","Data":"191c8525c5accf7b9b0b09028edd5a698644226b65b1ba046da9d58e870ffc08"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.375222 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ff879b4-32b1-430b-bf3c-9f020c76ba0f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vsxgl\" (UID: \"5ff879b4-32b1-430b-bf3c-9f020c76ba0f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.378421 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.378617 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" event={"ID":"5693ded5-11d5-4a9e-9c3e-737d02dcec96","Type":"ContainerStarted","Data":"dbcecc159a9e65ec710ca1728f14c1a37f7fa26138bced2bee2dad2708d8b2e9"}
Jan 28 09:44:49 crc kubenswrapper[4735]: E0128 09:44:49.379052 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:49.879029627 +0000 UTC m=+143.028694000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.390157 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.396948 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" event={"ID":"e308a994-46ac-4ad4-9b44-f4e0cc910c81","Type":"ContainerStarted","Data":"4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.397026 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" event={"ID":"e308a994-46ac-4ad4-9b44-f4e0cc910c81","Type":"ContainerStarted","Data":"b944d89c49de71cc4998813c3bfa3d87cb5602f39b39b343fa7e1894adf9d01b"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.402230 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.406679 4735 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dnhz5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.406769 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" podUID="e308a994-46ac-4ad4-9b44-f4e0cc910c81" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.411069 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjnbb\" (UniqueName: \"kubernetes.io/projected/af4b340f-065b-432e-a0be-d19647b019cb-kube-api-access-fjnbb\") pod \"dns-default-q62td\" (UID: \"af4b340f-065b-432e-a0be-d19647b019cb\") " pod="openshift-dns/dns-default-q62td"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.415869 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbwg\" (UniqueName: \"kubernetes.io/projected/38416160-6197-4351-89dd-65bc4289993b-kube-api-access-8gbwg\") pod \"router-default-5444994796-zghzx\" (UID: \"38416160-6197-4351-89dd-65bc4289993b\") " pod="openshift-ingress/router-default-5444994796-zghzx"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.421291 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.435003 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.436119 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" event={"ID":"5f918d4c-80ee-435f-a363-f630e97965a0","Type":"ContainerStarted","Data":"f9fee472efbcb574b389895fe76c6533d89af75415c9c1f805f0643b2b7107f5"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.443373 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" event={"ID":"8a1b5d2f-fff3-4843-875e-001584dbbba1","Type":"ContainerStarted","Data":"f9ccba72c92be6b0635637189f471de19fb6673bf84185d82bbbfb64abffb0ce"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.443431 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" event={"ID":"8a1b5d2f-fff3-4843-875e-001584dbbba1","Type":"ContainerStarted","Data":"4e5e021b7ae4c17c14fd403e22dcb9de4daa1877c5938f94e0bb8c861245e491"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.444906 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8xlx7" event={"ID":"9c6798d5-2a74-4a6f-95dc-184b82db4a25","Type":"ContainerStarted","Data":"c5515aba99a5eb1a3b7fb235653b089d250411abb034557508d7572be86ff22f"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.447029 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4btx\" (UniqueName: \"kubernetes.io/projected/b4340feb-9a36-45f1-8a67-8df7b8040cf0-kube-api-access-z4btx\") pod \"csi-hostpathplugin-2zhvr\" (UID: \"b4340feb-9a36-45f1-8a67-8df7b8040cf0\") " pod="hostpath-provisioner/csi-hostpathplugin-2zhvr"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.458360 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" event={"ID":"8dcd226c-c318-412c-9483-2f359c9a1b75","Type":"ContainerStarted","Data":"7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.458421 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" event={"ID":"8dcd226c-c318-412c-9483-2f359c9a1b75","Type":"ContainerStarted","Data":"6c11de2bdb1bb64baad26b7ec41fac4d8b19ef2a304d60208ae2ab60f71b4616"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.459313 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.464892 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fm4xr"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.468229 4735 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-knnrt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.468294 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" podUID="8dcd226c-c318-412c-9483-2f359c9a1b75" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.471538 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54qw2\" (UniqueName: \"kubernetes.io/projected/c2380f3e-65de-4df8-b674-a4c9b16d7218-kube-api-access-54qw2\") pod \"packageserver-d55dfcdfc-vrdhn\" (UID: \"c2380f3e-65de-4df8-b674-a4c9b16d7218\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.472837 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q62td"
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.479622 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" event={"ID":"c5d19be7-214e-4428-835a-5c325772fe6e","Type":"ContainerStarted","Data":"4de9b69336f759b3a12739fc6d314655c5cd4f3b7f5a05fe3d6c93ae62cc5ff4"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.479668 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" event={"ID":"c5d19be7-214e-4428-835a-5c325772fe6e","Type":"ContainerStarted","Data":"a2626a4670cbffbff152c224c14180877f4e08d2b30571f8ba9346e8fe4995db"}
Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.483903 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb"
Jan 28 09:44:49 crc kubenswrapper[4735]: E0128 09:44:49.485108 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:49.98508393 +0000 UTC m=+143.134748303 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.489380 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4qnqk"] Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.491011 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzqgh\" (UniqueName: \"kubernetes.io/projected/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-kube-api-access-fzqgh\") pod \"marketplace-operator-79b997595-tkljg\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.497944 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dljl2\" (UniqueName: \"kubernetes.io/projected/d75bce32-7014-4fc3-8721-c8a9f231455f-kube-api-access-dljl2\") pod \"ingress-canary-6v6rm\" (UID: \"d75bce32-7014-4fc3-8721-c8a9f231455f\") " pod="openshift-ingress-canary/ingress-canary-6v6rm" Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.585661 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:49 crc kubenswrapper[4735]: E0128 09:44:49.586876 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.086840452 +0000 UTC m=+143.236504825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.592236 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz"] Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.587450 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr" Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.603134 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj"] Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.654737 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.664245 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm"] Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.667783 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl" Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.693023 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.694543 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:49 crc kubenswrapper[4735]: E0128 09:44:49.695028 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.195012129 +0000 UTC m=+143.344676502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:49 crc kubenswrapper[4735]: W0128 09:44:49.697824 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod448247d5_7682_47db_a6cf_60743d53bf92.slice/crio-28974cc05c430d3692d6072be55f631e4be895f07b9a121cb7f85620c27ccd24 WatchSource:0}: Error finding container 28974cc05c430d3692d6072be55f631e4be895f07b9a121cb7f85620c27ccd24: Status 404 returned error can't find the container with id 28974cc05c430d3692d6072be55f631e4be895f07b9a121cb7f85620c27ccd24 Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.732300 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.743152 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.768637 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6v6rm" Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.796757 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:49 crc kubenswrapper[4735]: E0128 09:44:49.797240 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.297217782 +0000 UTC m=+143.446882155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.898461 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:49 crc kubenswrapper[4735]: E0128 09:44:49.899100 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.399073526 +0000 UTC m=+143.548737899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.992648 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lb5dv"] Jan 28 09:44:49 crc kubenswrapper[4735]: I0128 09:44:49.995106 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s5d86"] Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.001355 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.001930 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.501901085 +0000 UTC m=+143.651565458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.026797 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s"] Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.108328 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.109420 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.609394306 +0000 UTC m=+143.759058679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.182667 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j"] Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.210059 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.210673 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.710629603 +0000 UTC m=+143.860293976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.211536 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.212262 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.712251815 +0000 UTC m=+143.861916188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.213207 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm"] Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.314685 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.315115 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.815089305 +0000 UTC m=+143.964753678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.416610 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.417385 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:50.91735649 +0000 UTC m=+144.067021023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.502410 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" event={"ID":"de09e0e4-b898-412b-a745-e53ddf521bb0","Type":"ContainerStarted","Data":"99d27402d821fb335c335c60c80e938d06dcbda8777f4ecfd2939c6b7af55c40"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.510718 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" event={"ID":"8a1b5d2f-fff3-4843-875e-001584dbbba1","Type":"ContainerStarted","Data":"1daa41ac8c17a2222b4a888b0cb1d3c12400f4b273f96e7cd40ac2d72c602098"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.519699 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.520021 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.019983163 +0000 UTC m=+144.169647536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.520152 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.522200 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.022182321 +0000 UTC m=+144.171846694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.529004 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" event={"ID":"c5d19be7-214e-4428-835a-5c325772fe6e","Type":"ContainerStarted","Data":"c0e51ea84eb2530b38c762997f79727d10b775f7972c5c00686b1ffccc0ac08c"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.532912 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" event={"ID":"cb03af8f-fc04-46b0-9f6e-d4700a872288","Type":"ContainerStarted","Data":"22fb08cb85462d38735c00cb1d970c2720871541ec500e79a3928cfa690f6274"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.535977 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8xlx7" event={"ID":"9c6798d5-2a74-4a6f-95dc-184b82db4a25","Type":"ContainerStarted","Data":"5e56121f7c857cc39c3bd9759e79024040e47e9f371255b7a150deba32a4d00e"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.537031 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8xlx7" Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.543802 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fm4xr" event={"ID":"79fa6cc2-925b-4ae3-b774-5399ff66fa5b","Type":"ContainerStarted","Data":"bce5bcf7a359dcbeac38829c474e26a9c4b2376d05a98016543f22b390c04fd2"} Jan 28 
09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.544736 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-8xlx7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.545008 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8xlx7" podUID="9c6798d5-2a74-4a6f-95dc-184b82db4a25" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.569011 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zghzx" event={"ID":"38416160-6197-4351-89dd-65bc4289993b","Type":"ContainerStarted","Data":"f8dbb597b1b9f3a2ddeae5b38ccc288b5860bcbc388add5fa281c3c83ff0cfbb"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.590961 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" event={"ID":"bf964fdc-5a3b-4f58-a73f-63d5af9f456e","Type":"ContainerStarted","Data":"afdf609f5ea2e80ece8f3350d82eb896b52b4e8c64da6260ebea058d90913d88"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.614980 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm" event={"ID":"11920696-6788-4b4a-9c6e-cbe4e3026be5","Type":"ContainerStarted","Data":"a8a72bcbdfd16ef71829f058abb50a68e0c56757936ee06dfc638e1cfd08e198"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.621187 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.631847 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.131815646 +0000 UTC m=+144.281480019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.686130 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" podStartSLOduration=118.686091475 podStartE2EDuration="1m58.686091475s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:50.674194326 +0000 UTC m=+143.823858699" watchObservedRunningTime="2026-01-28 09:44:50.686091475 +0000 UTC m=+143.835755848" Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.698843 4735 generic.go:334] "Generic (PLEG): container finished" podID="fe367f89-eb40-437c-b0e0-c7d9ac7dd376" containerID="9744018afad493c527acdc3665799718c5ac03d7431e4e4813cb59b60cdde24c" exitCode=0 Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.699972 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" 
event={"ID":"fe367f89-eb40-437c-b0e0-c7d9ac7dd376","Type":"ContainerDied","Data":"9744018afad493c527acdc3665799718c5ac03d7431e4e4813cb59b60cdde24c"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.725934 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.726477 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.226460693 +0000 UTC m=+144.376125066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.726857 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j" event={"ID":"18ae77ef-0b75-46bb-8eda-a99e9f12925c","Type":"ContainerStarted","Data":"327eb923e98ffdf9b9036b8cee83af077ca50858856c9a9934fa64cb8837e3ce"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.737566 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" 
event={"ID":"8e7b0527-d0de-4bc3-b59d-29b49799cad3","Type":"ContainerStarted","Data":"658bc4ab84186687024cc675cc8761e2efe9f0cb7c05432a158550eb054fabb4"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.765497 4735 generic.go:334] "Generic (PLEG): container finished" podID="0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769" containerID="e86e62a8fa33dccfea6f4258f9129eb579ea0c7f67e0dc84b1ac1f8db3c06598" exitCode=0 Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.771923 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" event={"ID":"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769","Type":"ContainerDied","Data":"e86e62a8fa33dccfea6f4258f9129eb579ea0c7f67e0dc84b1ac1f8db3c06598"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.777450 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" podStartSLOduration=118.777426726 podStartE2EDuration="1m58.777426726s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:50.763378051 +0000 UTC m=+143.913042424" watchObservedRunningTime="2026-01-28 09:44:50.777426726 +0000 UTC m=+143.927091099" Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.782913 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" event={"ID":"5f918d4c-80ee-435f-a363-f630e97965a0","Type":"ContainerStarted","Data":"89e80d773cd60f487fc69d372843bb3b032ec6febf0553bcb4b9906a5c757d6c"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.794091 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" 
event={"ID":"e0a428da-f914-4463-8baf-899aed0920b1","Type":"ContainerStarted","Data":"1cc6909e587297bf820483d8c014d6d8f2bb083fe4385d850cc3e67edea5c8ac"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.822131 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" event={"ID":"9ba71cb2-abfc-47b7-9416-058006a9798a","Type":"ContainerStarted","Data":"22569ef43d5510b706c45c3f4efe2393a5bd9823f0ac85b1130c5c211da40c4c"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.829909 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.830098 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.330079133 +0000 UTC m=+144.479743506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.830538 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.832144 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.332135466 +0000 UTC m=+144.481799839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.858427 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" event={"ID":"448247d5-7682-47db-a6cf-60743d53bf92","Type":"ContainerStarted","Data":"28974cc05c430d3692d6072be55f631e4be895f07b9a121cb7f85620c27ccd24"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.862164 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhwqk"] Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.862198 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s5d86" event={"ID":"c9efa93c-e7ba-418e-b5f0-bd063e650ee8","Type":"ContainerStarted","Data":"4ed96aef5c60ecfa462d094cb564041cc6f67d75f27dda6650eac3304b0ad14b"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.864719 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxk69" podStartSLOduration=118.864704921 podStartE2EDuration="1m58.864704921s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:50.846835408 +0000 UTC m=+143.996499781" watchObservedRunningTime="2026-01-28 09:44:50.864704921 +0000 UTC m=+144.014369294" Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.864919 4735 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c"] Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.868740 4735 generic.go:334] "Generic (PLEG): container finished" podID="5693ded5-11d5-4a9e-9c3e-737d02dcec96" containerID="95887cb4f25a5d4eed963f056505466f94243cb23b452ff9bec6eda69571c6c6" exitCode=0 Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.869022 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" event={"ID":"5693ded5-11d5-4a9e-9c3e-737d02dcec96","Type":"ContainerDied","Data":"95887cb4f25a5d4eed963f056505466f94243cb23b452ff9bec6eda69571c6c6"} Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.882539 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:44:50 crc kubenswrapper[4735]: I0128 09:44:50.934267 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:50 crc kubenswrapper[4735]: E0128 09:44:50.936007 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.435987161 +0000 UTC m=+144.585651524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.038240 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.040300 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.540285439 +0000 UTC m=+144.689949812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.139632 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.139740 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.63972039 +0000 UTC m=+144.789384763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.140295 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.140588 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.640580782 +0000 UTC m=+144.790245155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.145876 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8xlx7" podStartSLOduration=119.145858949 podStartE2EDuration="1m59.145858949s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:51.11160333 +0000 UTC m=+144.261267703" watchObservedRunningTime="2026-01-28 09:44:51.145858949 +0000 UTC m=+144.295523322" Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.195729 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8rw4p" podStartSLOduration=119.195697633 podStartE2EDuration="1m59.195697633s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:51.147521512 +0000 UTC m=+144.297185885" watchObservedRunningTime="2026-01-28 09:44:51.195697633 +0000 UTC m=+144.345362006" Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.241150 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.241363 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.741331167 +0000 UTC m=+144.890995540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.241612 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.242147 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.742121138 +0000 UTC m=+144.891785511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.243668 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wpjcc" podStartSLOduration=119.243654148 podStartE2EDuration="1m59.243654148s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:51.240338412 +0000 UTC m=+144.390002785" watchObservedRunningTime="2026-01-28 09:44:51.243654148 +0000 UTC m=+144.393318541" Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.269916 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.286018 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr"] Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.286638 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-b9brv" podStartSLOduration=119.286627884 podStartE2EDuration="1m59.286627884s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:51.285810402 +0000 UTC m=+144.435474795" 
watchObservedRunningTime="2026-01-28 09:44:51.286627884 +0000 UTC m=+144.436292257" Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.351849 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.354596 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.854364932 +0000 UTC m=+145.004029305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.377960 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc"] Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.459096 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:51 crc 
kubenswrapper[4735]: E0128 09:44:51.459592 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:51.959575032 +0000 UTC m=+145.109239395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.570169 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.570884 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.070868912 +0000 UTC m=+145.220533285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.679836 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.680350 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.180332853 +0000 UTC m=+145.329997216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.781573 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.782549 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.282525506 +0000 UTC m=+145.432189869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.808970 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk"] Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.842007 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6dsdg"] Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.885865 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.886547 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.386533215 +0000 UTC m=+145.536197588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.899567 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv"] Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.901695 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm"] Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.967053 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zghzx" event={"ID":"38416160-6197-4351-89dd-65bc4289993b","Type":"ContainerStarted","Data":"3f2610e095d57025b046c993637f2a11bda72f7fd3e4cd5ffa94be33dfb5a64c"} Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.987095 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" event={"ID":"bf964fdc-5a3b-4f58-a73f-63d5af9f456e","Type":"ContainerStarted","Data":"62dcca75c859dd1155d1714ebb0e5a7e2eb3c4f31032f2a5b9d9394238e60123"} Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.988720 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.994649 4735 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.494622691 +0000 UTC m=+145.644287064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:51 crc kubenswrapper[4735]: I0128 09:44:51.995254 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:51 crc kubenswrapper[4735]: E0128 09:44:51.995541 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.495533785 +0000 UTC m=+145.645198158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.034385 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j" event={"ID":"18ae77ef-0b75-46bb-8eda-a99e9f12925c","Type":"ContainerStarted","Data":"821b716d748e85e1db35ea2a5bec1eeb6abad4ff32b060d2c7785e12216fdb44"} Jan 28 09:44:52 crc kubenswrapper[4735]: W0128 09:44:52.036944 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03935bf4_59fa_4f90_9307_450ea6b22983.slice/crio-f5b0f7eece238acea7c6e284a22ff1be23fc1330ebea1f1dcc14736746bce353 WatchSource:0}: Error finding container f5b0f7eece238acea7c6e284a22ff1be23fc1330ebea1f1dcc14736746bce353: Status 404 returned error can't find the container with id f5b0f7eece238acea7c6e284a22ff1be23fc1330ebea1f1dcc14736746bce353 Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.038049 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fm4xr" event={"ID":"79fa6cc2-925b-4ae3-b774-5399ff66fa5b","Type":"ContainerStarted","Data":"274dccf368addad8d70d0ea712f0de41e6236031cb34b0d9677440800f3e8355"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.049158 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2zhvr"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.089853 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" event={"ID":"5693ded5-11d5-4a9e-9c3e-737d02dcec96","Type":"ContainerStarted","Data":"4308953d2c2449c18d1c7f5f1b2f0dfb705879b4aa4f10a30de0fa254daa9219"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.090848 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.095932 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.096055 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.596032154 +0000 UTC m=+145.745696527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.096506 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.099005 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.59898118 +0000 UTC m=+145.748645733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.101050 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zghzx" podStartSLOduration=120.101017162 podStartE2EDuration="2m0.101017162s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.086838975 +0000 UTC m=+145.236503358" watchObservedRunningTime="2026-01-28 09:44:52.101017162 +0000 UTC m=+145.250681535" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.132030 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjnbj"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.152130 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.160678 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" event={"ID":"4311a6d0-765a-489b-91da-cb4e23ca2617","Type":"ContainerStarted","Data":"c315406b29dfab2143a1a2feba8dcc48699b61665ed876eafe0ee20f064c6477"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.160728 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" 
event={"ID":"4311a6d0-765a-489b-91da-cb4e23ca2617","Type":"ContainerStarted","Data":"079e7ea4d2ebb15eb8befbb8d77c1d3c0094dbdfe8b7d0c2f73ca5a734e49796"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.185473 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fm4xr" podStartSLOduration=6.185456155 podStartE2EDuration="6.185456155s" podCreationTimestamp="2026-01-28 09:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.170813904 +0000 UTC m=+145.320478277" watchObservedRunningTime="2026-01-28 09:44:52.185456155 +0000 UTC m=+145.335120528" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.185758 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.199450 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.199789 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.699773256 +0000 UTC m=+145.849437629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.224045 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm" event={"ID":"11920696-6788-4b4a-9c6e-cbe4e3026be5","Type":"ContainerStarted","Data":"e31af923dab2a0c000b4ca29298707b9fdfa4a209de3137eb3e6e4e4dd457fd3"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.227221 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q62td"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.288741 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" event={"ID":"f42071fc-81cc-463c-8648-bbf11d175271","Type":"ContainerStarted","Data":"24c5f0d0b0ba7e4809a9859a28cf852fecc314928d0357ed243f8b79c74d3130"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.288832 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" event={"ID":"f42071fc-81cc-463c-8648-bbf11d175271","Type":"ContainerStarted","Data":"edb1b0c9309c5b6fac6649b0867a1d8add6151ac2bf3d20f7ac88e1f457f2fe2"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.302508 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: 
\"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.302943 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.802924564 +0000 UTC m=+145.952588937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.304668 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.306905 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" event={"ID":"cb03af8f-fc04-46b0-9f6e-d4700a872288","Type":"ContainerStarted","Data":"f94787735169c85a25051ce77ae805956d86601f367ddb3fb752e8fda91955f4"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.306992 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" event={"ID":"cb03af8f-fc04-46b0-9f6e-d4700a872288","Type":"ContainerStarted","Data":"4b23bc299a72ad942c998226fc82e15d8aec2b25f449ad0f0076d3f99b179b54"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.314352 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-tkljg"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.321012 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.326054 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dk28j" podStartSLOduration=120.326028354 podStartE2EDuration="2m0.326028354s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.251373996 +0000 UTC m=+145.401038369" watchObservedRunningTime="2026-01-28 09:44:52.326028354 +0000 UTC m=+145.475692727" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.334802 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" event={"ID":"8e7b0527-d0de-4bc3-b59d-29b49799cad3","Type":"ContainerStarted","Data":"a3b89d09fe49f67eded50653eb913e63966dfed8a7fac155b8db589aff0e0ff9"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.353711 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.353781 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c95cr"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.374627 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" podStartSLOduration=120.374602455 podStartE2EDuration="2m0.374602455s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.354880793 +0000 UTC m=+145.504545196" watchObservedRunningTime="2026-01-28 09:44:52.374602455 +0000 UTC m=+145.524266828" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.375570 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6v6rm"] Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.400017 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" event={"ID":"9ba71cb2-abfc-47b7-9416-058006a9798a","Type":"ContainerStarted","Data":"e46554deb5fd5115d19fdbd5339ee3c7b46e341e4590ea14f51d68fa1b69d39a"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.400062 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.412554 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.414092 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.914069329 +0000 UTC m=+146.063733692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.420413 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.421175 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:52.921164773 +0000 UTC m=+146.070829146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.433902 4735 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-c6gjz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.434045 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" podUID="9ba71cb2-abfc-47b7-9416-058006a9798a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.456622 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s5d86" event={"ID":"c9efa93c-e7ba-418e-b5f0-bd063e650ee8","Type":"ContainerStarted","Data":"2a879cb577c4e19e2ddb647fb6aca8a7d3ad1f0ddd9f63e55ad932d8a35d747e"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.461688 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.472813 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-7vrhj" 
podStartSLOduration=120.472792043 podStartE2EDuration="2m0.472792043s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.419952882 +0000 UTC m=+145.569617245" watchObservedRunningTime="2026-01-28 09:44:52.472792043 +0000 UTC m=+145.622456416" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.480165 4735 patch_prober.go:28] interesting pod/console-operator-58897d9998-s5d86 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.480340 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-s5d86" podUID="c9efa93c-e7ba-418e-b5f0-bd063e650ee8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.480722 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6dsdg" event={"ID":"bfc4463e-9f59-4e22-b40d-ebf47cc84969","Type":"ContainerStarted","Data":"a2925c4ebc4a2df87c687ddfef3a470bfad45b4031ce8ea94d053fa403286516"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.527382 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.530761 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.030718957 +0000 UTC m=+146.180383330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.535083 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-s5d86" podStartSLOduration=120.535046859 podStartE2EDuration="2m0.535046859s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.528675054 +0000 UTC m=+145.678339417" watchObservedRunningTime="2026-01-28 09:44:52.535046859 +0000 UTC m=+145.684711232" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.535419 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x6nbm" podStartSLOduration=120.535414029 podStartE2EDuration="2m0.535414029s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.484897738 +0000 UTC m=+145.634562111" watchObservedRunningTime="2026-01-28 09:44:52.535414029 +0000 UTC m=+145.685078392" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.543997 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" event={"ID":"de09e0e4-b898-412b-a745-e53ddf521bb0","Type":"ContainerStarted","Data":"b835b99f7ae6eef2b6406f0cab42884dd41608295f4edb4f3af9c993cf6e59eb"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.562890 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" event={"ID":"fe367f89-eb40-437c-b0e0-c7d9ac7dd376","Type":"ContainerStarted","Data":"981a94bfab54d61d7f4592066d0036807fe95f811e31820645b308655e64240f"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.584050 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" event={"ID":"e0a428da-f914-4463-8baf-899aed0920b1","Type":"ContainerStarted","Data":"36925ff56fd907ec6648e9ce50b3fedd34a9ef644e25027d95eb08e527183dd5"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.586078 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" podStartSLOduration=120.586050813 podStartE2EDuration="2m0.586050813s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.558004845 +0000 UTC m=+145.707669218" watchObservedRunningTime="2026-01-28 09:44:52.586050813 +0000 UTC m=+145.735715186" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.587458 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v8stz" podStartSLOduration=120.58745092 podStartE2EDuration="2m0.58745092s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.584498703 +0000 UTC m=+145.734163076" watchObservedRunningTime="2026-01-28 09:44:52.58745092 +0000 UTC m=+145.737115283" Jan 28 09:44:52 crc kubenswrapper[4735]: W0128 09:44:52.603655 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75bce32_7014_4fc3_8721_c8a9f231455f.slice/crio-c0a609805ad7959fc1bc0e739e9118fee8a7ae115d259ec23e0324c99390965e WatchSource:0}: Error finding container c0a609805ad7959fc1bc0e739e9118fee8a7ae115d259ec23e0324c99390965e: Status 404 returned error can't find the container with id c0a609805ad7959fc1bc0e739e9118fee8a7ae115d259ec23e0324c99390965e Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.614308 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" event={"ID":"448247d5-7682-47db-a6cf-60743d53bf92","Type":"ContainerStarted","Data":"9d41ceb418b2970b0bdc1fb057f8665b87fe35358bdbcca7a9f4e21e892398a6"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.631132 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.631580 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.131563304 +0000 UTC m=+146.281227677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.650971 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" event={"ID":"3c8d118a-3940-4efc-a7f3-16cb8de02990","Type":"ContainerStarted","Data":"861d7d49f1f7d751020d4ea960ec62cde2b0a5b170232dc6734b3f53f95362b5"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.652700 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.669919 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr" event={"ID":"4eaba190-637d-4895-8a03-de4b806c5062","Type":"ContainerStarted","Data":"0c405355245040fa93b54e89fc3bfad643e34c6812b6cd7099528d0d560eb2ac"} Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.670693 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-8xlx7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.670731 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8xlx7" podUID="9c6798d5-2a74-4a6f-95dc-184b82db4a25" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 
10.217.0.27:8080: connect: connection refused" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.670810 4735 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fhwqk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.670823 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" podUID="3c8d118a-3940-4efc-a7f3-16cb8de02990" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.683458 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" podStartSLOduration=120.683441241 podStartE2EDuration="2m0.683441241s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.653216887 +0000 UTC m=+145.802881270" watchObservedRunningTime="2026-01-28 09:44:52.683441241 +0000 UTC m=+145.833105614" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.684549 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" podStartSLOduration=120.68454508 podStartE2EDuration="2m0.68454508s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.682295882 +0000 UTC m=+145.831960265" watchObservedRunningTime="2026-01-28 09:44:52.68454508 +0000 UTC m=+145.834209453" Jan 28 09:44:52 crc 
kubenswrapper[4735]: I0128 09:44:52.697081 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.710444 4735 csr.go:261] certificate signing request csr-9qmpt is approved, waiting to be issued Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.716529 4735 csr.go:257] certificate signing request csr-9qmpt is issued Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.720526 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4qnqk" podStartSLOduration=120.720505963 podStartE2EDuration="2m0.720505963s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.718911132 +0000 UTC m=+145.868575505" watchObservedRunningTime="2026-01-28 09:44:52.720505963 +0000 UTC m=+145.870170346" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.722119 4735 patch_prober.go:28] interesting pod/router-default-5444994796-zghzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 09:44:52 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 28 09:44:52 crc kubenswrapper[4735]: [+]process-running ok Jan 28 09:44:52 crc kubenswrapper[4735]: healthz check failed Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.722171 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zghzx" podUID="38416160-6197-4351-89dd-65bc4289993b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.737618 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.738914 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.238886241 +0000 UTC m=+146.388550674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.768074 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" podStartSLOduration=120.768043538 podStartE2EDuration="2m0.768043538s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:52.765294386 +0000 UTC m=+145.914958779" watchObservedRunningTime="2026-01-28 09:44:52.768043538 +0000 UTC m=+145.917707911" Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.842104 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.842402 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.342387527 +0000 UTC m=+146.492051900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:52 crc kubenswrapper[4735]: I0128 09:44:52.944203 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:52 crc kubenswrapper[4735]: E0128 09:44:52.944659 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.444634641 +0000 UTC m=+146.594299014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.047468 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.048172 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.548159369 +0000 UTC m=+146.697823742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.149801 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.150025 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.649980902 +0000 UTC m=+146.799645275 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.150147 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.150934 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.650927116 +0000 UTC m=+146.800591489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.251226 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.251699 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.751669811 +0000 UTC m=+146.901334184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.353673 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.354147 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.85411617 +0000 UTC m=+147.003780543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.430846 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.430898 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.457847 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.458391 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:53.958363566 +0000 UTC m=+147.108027949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.559606 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.559955 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.059942913 +0000 UTC m=+147.209607286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.660661 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.661702 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.161674134 +0000 UTC m=+147.311338507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.701296 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.705910 4735 patch_prober.go:28] interesting pod/router-default-5444994796-zghzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 09:44:53 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 28 09:44:53 crc kubenswrapper[4735]: [+]process-running ok Jan 28 09:44:53 crc kubenswrapper[4735]: healthz check failed Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.705965 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zghzx" podUID="38416160-6197-4351-89dd-65bc4289993b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.717878 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 09:39:52 +0000 UTC, rotation deadline is 2026-11-08 15:53:38.147079416 +0000 UTC Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.717925 4735 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6822h8m44.429157482s for next certificate rotation Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.764243 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.764724 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.264711519 +0000 UTC m=+147.414375892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.774969 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" event={"ID":"bf964fdc-5a3b-4f58-a73f-63d5af9f456e","Type":"ContainerStarted","Data":"23e36173d9ea27033e0263ffbefecc22f598a9aadcc6e75e1f8f07c7c9b56f9e"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.805051 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm" event={"ID":"03935bf4-59fa-4f90-9307-450ea6b22983","Type":"ContainerStarted","Data":"a724a980e98013f178ebea863f09d9b06491cafaf78811e6e4fa2fd9a14b9aa6"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.805362 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm" event={"ID":"03935bf4-59fa-4f90-9307-450ea6b22983","Type":"ContainerStarted","Data":"f5b0f7eece238acea7c6e284a22ff1be23fc1330ebea1f1dcc14736746bce353"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.841008 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" event={"ID":"f42071fc-81cc-463c-8648-bbf11d175271","Type":"ContainerStarted","Data":"3d3dbc609a54b2f5254a6c8ae7d99727cc7e5ff69db48503e52516b99e9ad0fe"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.857971 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-lb5dv" podStartSLOduration=121.857950698 podStartE2EDuration="2m1.857950698s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:53.848621887 +0000 UTC m=+146.998286270" watchObservedRunningTime="2026-01-28 09:44:53.857950698 +0000 UTC m=+147.007615071" Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.867176 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.875713 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.375690029 +0000 UTC m=+147.525354402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.875864 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.876127 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.37611765 +0000 UTC m=+147.525782023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.877448 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" event={"ID":"868b9120-1a06-4845-8e31-4f9bbb6d5a76","Type":"ContainerStarted","Data":"9e9b783c0ff936c24dbc1d417c30c04448ae0dd73cd1c2048021d229d0aee296"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.877513 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" event={"ID":"868b9120-1a06-4845-8e31-4f9bbb6d5a76","Type":"ContainerStarted","Data":"00766447a7577d081ab5c11801e9171a75a3acd1e02155f7cf089ebe096c4ea6"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.877551 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" event={"ID":"868b9120-1a06-4845-8e31-4f9bbb6d5a76","Type":"ContainerStarted","Data":"6b849d680b6de134182a12d0d4541ddf2bd5cca8d09c3531d182961bdf90e04c"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.878630 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.901327 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" 
event={"ID":"c2380f3e-65de-4df8-b674-a4c9b16d7218","Type":"ContainerStarted","Data":"dbd98f1c821cd222cae02e6fedfa99ecb4dbe7c96eec9c4ebf7ff590adecd981"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.901399 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" event={"ID":"c2380f3e-65de-4df8-b674-a4c9b16d7218","Type":"ContainerStarted","Data":"a823654c2200a64ae4682dbb372d43ff7ec0870aa6de5ed79d38be4d5f130890"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.902887 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.902886 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2vkwm" podStartSLOduration=121.902861654 podStartE2EDuration="2m1.902861654s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:53.901771976 +0000 UTC m=+147.051436349" watchObservedRunningTime="2026-01-28 09:44:53.902861654 +0000 UTC m=+147.052526027" Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.922953 4735 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vrdhn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.923316 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" podUID="c2380f3e-65de-4df8-b674-a4c9b16d7218" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": 
dial tcp 10.217.0.32:5443: connect: connection refused" Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.965012 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c95cr" event={"ID":"a093e5d2-7fd4-488a-86ef-61bf9ae18728","Type":"ContainerStarted","Data":"a9657315a1cda9d7799b68bde8b2860a7e356d37d76b4ca517d101f14d04b4d2"} Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.982013 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.982442 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.482411079 +0000 UTC m=+147.632075452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:53 crc kubenswrapper[4735]: I0128 09:44:53.982782 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:53 crc kubenswrapper[4735]: E0128 09:44:53.984472 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.484453373 +0000 UTC m=+147.634117746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.011352 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk" event={"ID":"432a29bc-0436-4d07-bb40-25bb08020746","Type":"ContainerStarted","Data":"2b2ce87eeed68bb793e651031a779d753e0d502c4f01140d247a325864f8089c"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.011418 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk" event={"ID":"432a29bc-0436-4d07-bb40-25bb08020746","Type":"ContainerStarted","Data":"d5893abcd9125c374e827e91a605b8564d654dcee6dcaa0b497a4f3e2a3cb02d"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.057725 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm" event={"ID":"11920696-6788-4b4a-9c6e-cbe4e3026be5","Type":"ContainerStarted","Data":"e3ffde61a2509c3af64bacdb61a61f4dd05f56f8c0b1ec0e3dd2edb2a33bdaad"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.060781 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl" event={"ID":"5ff879b4-32b1-430b-bf3c-9f020c76ba0f","Type":"ContainerStarted","Data":"57ba02d9c390b614b7330276f4e9b9d47401e2fb9c8a1cd211d9eb49e147fac5"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.062818 4735 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vjbc" podStartSLOduration=122.062806927 podStartE2EDuration="2m2.062806927s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:53.952337129 +0000 UTC m=+147.102001502" watchObservedRunningTime="2026-01-28 09:44:54.062806927 +0000 UTC m=+147.212471300" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.063237 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" podStartSLOduration=122.063230387 podStartE2EDuration="2m2.063230387s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.057278863 +0000 UTC m=+147.206943246" watchObservedRunningTime="2026-01-28 09:44:54.063230387 +0000 UTC m=+147.212894760" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.084992 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.085157 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.585136895 +0000 UTC m=+147.734801268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.085381 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.086474 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.58645632 +0000 UTC m=+147.736120693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.091526 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6v6rm" event={"ID":"d75bce32-7014-4fc3-8721-c8a9f231455f","Type":"ContainerStarted","Data":"afa8eb329524beb44844862059b96650beea48f6b6383c80b84cb3ff6b0ca4c7"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.091588 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6v6rm" event={"ID":"d75bce32-7014-4fc3-8721-c8a9f231455f","Type":"ContainerStarted","Data":"c0a609805ad7959fc1bc0e739e9118fee8a7ae115d259ec23e0324c99390965e"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.095849 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr" event={"ID":"4eaba190-637d-4895-8a03-de4b806c5062","Type":"ContainerStarted","Data":"2fbcce0b00671fb5dec162391b538bee712a89cd66c2d346e10d7eb9da7b8bd1"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.130373 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6dsdg" event={"ID":"bfc4463e-9f59-4e22-b40d-ebf47cc84969","Type":"ContainerStarted","Data":"7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.157688 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj" 
event={"ID":"267eadc3-080c-430c-82f2-0c64c71f1ea9","Type":"ContainerStarted","Data":"788340c9f55545ae28cc21cb1f8a87334cc517b4abe80b33e14a0b7375b0b007"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.157734 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj" event={"ID":"267eadc3-080c-430c-82f2-0c64c71f1ea9","Type":"ContainerStarted","Data":"7aad8ba6920c52261e8dbdd0055041baedc518fbf4321c10945f66fd89efebe4"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.168096 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" event={"ID":"3c8d118a-3940-4efc-a7f3-16cb8de02990","Type":"ContainerStarted","Data":"393a8412db035ac085bd7cc5440518a3da52729c0582895be7a332f6b4147270"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.193200 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" podStartSLOduration=122.193179461 podStartE2EDuration="2m2.193179461s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.130545045 +0000 UTC m=+147.280209438" watchObservedRunningTime="2026-01-28 09:44:54.193179461 +0000 UTC m=+147.342843834" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.195208 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.196669 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.69665181 +0000 UTC m=+147.846316183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.241736 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" event={"ID":"4311a6d0-765a-489b-91da-cb4e23ca2617","Type":"ContainerStarted","Data":"c69841e8b2288f58cd22626117f7975cfc9eccf6d9f34c885bf4eece04fab58b"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.271291 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fzvkk" podStartSLOduration=122.271270647 podStartE2EDuration="2m2.271270647s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.212221044 +0000 UTC m=+147.361885417" watchObservedRunningTime="2026-01-28 09:44:54.271270647 +0000 UTC m=+147.420935020" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.272332 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pgmcm" podStartSLOduration=122.272322554 podStartE2EDuration="2m2.272322554s" 
podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.269899171 +0000 UTC m=+147.419563544" watchObservedRunningTime="2026-01-28 09:44:54.272322554 +0000 UTC m=+147.421986937" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.299818 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.301568 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.801531353 +0000 UTC m=+147.951195726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.307648 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2cdfr" podStartSLOduration=122.307629361 podStartE2EDuration="2m2.307629361s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.30680396 +0000 UTC m=+147.456468353" watchObservedRunningTime="2026-01-28 09:44:54.307629361 +0000 UTC m=+147.457293734" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.309155 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" event={"ID":"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa","Type":"ContainerStarted","Data":"e76d9bcc7d840a7c255c309b1e86523644a67b721a85e90b556bfb8d8d1bd198"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.309178 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" event={"ID":"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa","Type":"ContainerStarted","Data":"cde43b99a43760f8ffc18de62ed169f781305499b2a47c17564850cbd46173e1"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.310490 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.318472 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" event={"ID":"7df03872-d3ee-4701-832a-1aaf25d48f15","Type":"ContainerStarted","Data":"dd30e45df841e5c5d5e30d7251b9e0059722790ce1658ae52ed159ec9bb20c93"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.318518 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" event={"ID":"7df03872-d3ee-4701-832a-1aaf25d48f15","Type":"ContainerStarted","Data":"6728a1e90f02475441807c60ac8e779b1069ab2b416930bc338e805ce30bbc8e"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.319294 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.338947 4735 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vbfc2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.339014 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" podUID="7df03872-d3ee-4701-832a-1aaf25d48f15" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.339368 4735 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tkljg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.339388 4735 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.361427 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" event={"ID":"b4340feb-9a36-45f1-8a67-8df7b8040cf0","Type":"ContainerStarted","Data":"3c40f79d6554c8a43e26255a67d5b9bff61087aee36fa30c8c812c48a4c89fd2"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.366836 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl" podStartSLOduration=122.366820838 podStartE2EDuration="2m2.366820838s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.366285973 +0000 UTC m=+147.515950346" watchObservedRunningTime="2026-01-28 09:44:54.366820838 +0000 UTC m=+147.516485201" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.393065 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" event={"ID":"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769","Type":"ContainerStarted","Data":"6c4f2c057f0ccb92ba2d6ad05e08db3426a777d7d3c91d405106a33870dfb50a"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.393614 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" event={"ID":"0e5842b2-66a0-4bd1-a7b0-b1fdf27b1769","Type":"ContainerStarted","Data":"8d044fc353cfd6653c697f5d7216c66265571f0dd7c85a9d9da5f46702767fac"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.401385 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.403385 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:54.903362706 +0000 UTC m=+148.053027079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.405911 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" event={"ID":"4eae6a81-70da-45dd-88f9-bd270294b1a0","Type":"ContainerStarted","Data":"3bed06fe38816730753d24ec18edece9526bb2020e4b603df85d0d18bba89b8d"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.405952 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" event={"ID":"4eae6a81-70da-45dd-88f9-bd270294b1a0","Type":"ContainerStarted","Data":"2b52f1fb398fcd7fd7e86690087c8de7295cd47f3f5e810ea2281ce9bc3b0070"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.417167 
4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6v6rm" podStartSLOduration=8.417143584 podStartE2EDuration="8.417143584s" podCreationTimestamp="2026-01-28 09:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.416145568 +0000 UTC m=+147.565809951" watchObservedRunningTime="2026-01-28 09:44:54.417143584 +0000 UTC m=+147.566807957" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.424029 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q62td" event={"ID":"af4b340f-065b-432e-a0be-d19647b019cb","Type":"ContainerStarted","Data":"9911114c3a52b5c337853c753930ae382662a78015610010363ac5202d4d843a"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.424093 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q62td" event={"ID":"af4b340f-065b-432e-a0be-d19647b019cb","Type":"ContainerStarted","Data":"2b4501a25c078569bd1e91e29d95a2150761b6c1e415ff99a72253eeae9c7d3c"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.451670 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25" event={"ID":"139d3ad6-aaa8-4510-b5d1-6284f8433959","Type":"ContainerStarted","Data":"e5490a4d3f33f7f0ffdea9257aea37f541c63ccf38aa7dbad34ff3799054d278"} Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.460953 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-8xlx7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.461046 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8xlx7" 
podUID="9c6798d5-2a74-4a6f-95dc-184b82db4a25" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.481496 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vpwj9" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.495849 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-c6gjz" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.495823 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6dsdg" podStartSLOduration=122.495793516 podStartE2EDuration="2m2.495793516s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.473535448 +0000 UTC m=+147.623199831" watchObservedRunningTime="2026-01-28 09:44:54.495793516 +0000 UTC m=+147.645457889" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.505858 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.513877 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.013861555 +0000 UTC m=+148.163525928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.517692 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25" podStartSLOduration=122.517669633 podStartE2EDuration="2m2.517669633s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.517279323 +0000 UTC m=+147.666943696" watchObservedRunningTime="2026-01-28 09:44:54.517669633 +0000 UTC m=+147.667333996" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.544611 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4z4vz"] Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.546262 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.550152 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.571671 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z4vz"] Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.582556 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qdv5c" podStartSLOduration=122.582538787 podStartE2EDuration="2m2.582538787s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.571281955 +0000 UTC m=+147.720946328" watchObservedRunningTime="2026-01-28 09:44:54.582538787 +0000 UTC m=+147.732203150" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.610841 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.611065 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.611096 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-utilities\") pod \"community-operators-4z4vz\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.611137 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.611156 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.611179 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-catalog-content\") pod \"community-operators-4z4vz\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.611230 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.611292 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxk2\" (UniqueName: \"kubernetes.io/projected/707fe5c2-87d6-4636-9270-b3295925cefd-kube-api-access-lzxk2\") pod \"community-operators-4z4vz\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.612412 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.112389712 +0000 UTC m=+148.262054075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.623682 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.626563 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.632526 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" podStartSLOduration=122.632507544 podStartE2EDuration="2m2.632507544s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.623661094 +0000 UTC m=+147.773325467" watchObservedRunningTime="2026-01-28 09:44:54.632507544 +0000 UTC m=+147.782171917" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.636479 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.641219 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.714000 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" podStartSLOduration=122.713984319 podStartE2EDuration="2m2.713984319s" 
podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.709475972 +0000 UTC m=+147.859140345" watchObservedRunningTime="2026-01-28 09:44:54.713984319 +0000 UTC m=+147.863648692" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.721765 4735 patch_prober.go:28] interesting pod/router-default-5444994796-zghzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 09:44:54 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 28 09:44:54 crc kubenswrapper[4735]: [+]process-running ok Jan 28 09:44:54 crc kubenswrapper[4735]: healthz check failed Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.722314 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zghzx" podUID="38416160-6197-4351-89dd-65bc4289993b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.722681 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.722729 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-catalog-content\") pod \"community-operators-4z4vz\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " pod="openshift-marketplace/community-operators-4z4vz" 
Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.722807 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxk2\" (UniqueName: \"kubernetes.io/projected/707fe5c2-87d6-4636-9270-b3295925cefd-kube-api-access-lzxk2\") pod \"community-operators-4z4vz\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.722846 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-utilities\") pod \"community-operators-4z4vz\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.723004 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.222990983 +0000 UTC m=+148.372655346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.722799 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.723348 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-catalog-content\") pod \"community-operators-4z4vz\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.726156 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-utilities\") pod \"community-operators-4z4vz\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.740039 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.745025 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.781696 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" podStartSLOduration=122.781671756 podStartE2EDuration="2m2.781671756s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.77951613 +0000 UTC m=+147.929180503" watchObservedRunningTime="2026-01-28 09:44:54.781671756 +0000 UTC m=+147.931336129" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.797394 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxk2\" (UniqueName: \"kubernetes.io/projected/707fe5c2-87d6-4636-9270-b3295925cefd-kube-api-access-lzxk2\") pod \"community-operators-4z4vz\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.821230 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-djbr5" podStartSLOduration=122.821208802 podStartE2EDuration="2m2.821208802s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.819228901 +0000 UTC m=+147.968893274" watchObservedRunningTime="2026-01-28 09:44:54.821208802 +0000 UTC m=+147.970873185" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.823681 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.824036 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.324021465 +0000 UTC m=+148.473685838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.882174 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-s5d86" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.916449 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.923063 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cjnbj" podStartSLOduration=122.923028805 podStartE2EDuration="2m2.923028805s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:54.882241227 +0000 UTC m=+148.031905600" watchObservedRunningTime="2026-01-28 09:44:54.923028805 +0000 UTC m=+148.072693188" Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.929494 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:54 crc kubenswrapper[4735]: E0128 09:44:54.930278 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.430264333 +0000 UTC m=+148.579928706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.962576 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-skpv7"] Jan 28 09:44:54 crc kubenswrapper[4735]: I0128 09:44:54.965098 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.015470 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-skpv7"] Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.031457 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.031848 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwn7\" (UniqueName: \"kubernetes.io/projected/777b99db-3c0c-444d-ad0c-60e58359f6ad-kube-api-access-kgwn7\") pod \"community-operators-skpv7\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.031874 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-catalog-content\") pod \"community-operators-skpv7\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.031899 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-utilities\") pod \"community-operators-skpv7\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.032047 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.532029155 +0000 UTC m=+148.681693528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.133340 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.133435 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgwn7\" (UniqueName: \"kubernetes.io/projected/777b99db-3c0c-444d-ad0c-60e58359f6ad-kube-api-access-kgwn7\") pod \"community-operators-skpv7\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.133468 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-catalog-content\") pod \"community-operators-skpv7\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.133492 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-utilities\") pod \"community-operators-skpv7\" (UID: 
\"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.134079 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-utilities\") pod \"community-operators-skpv7\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.134303 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-catalog-content\") pod \"community-operators-skpv7\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.134576 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.634557566 +0000 UTC m=+148.784221939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.137186 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lcz6p"] Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.138412 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.153123 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcz6p"] Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.175858 4735 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fhwqk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.175928 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" podUID="3c8d118a-3940-4efc-a7f3-16cb8de02990" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.177272 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.191243 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgwn7\" (UniqueName: \"kubernetes.io/projected/777b99db-3c0c-444d-ad0c-60e58359f6ad-kube-api-access-kgwn7\") pod \"community-operators-skpv7\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.235120 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.235812 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9f4r\" (UniqueName: \"kubernetes.io/projected/b4ba59eb-a71b-467f-a035-9824e10a7ffe-kube-api-access-h9f4r\") pod \"certified-operators-lcz6p\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.235861 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-catalog-content\") pod \"certified-operators-lcz6p\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.235887 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-utilities\") pod \"certified-operators-lcz6p\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.235982 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.735966948 +0000 UTC m=+148.885631321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.338575 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9f4r\" (UniqueName: \"kubernetes.io/projected/b4ba59eb-a71b-467f-a035-9824e10a7ffe-kube-api-access-h9f4r\") pod \"certified-operators-lcz6p\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.338639 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-catalog-content\") pod \"certified-operators-lcz6p\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.338666 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-utilities\") pod \"certified-operators-lcz6p\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.338688 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: 
\"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.339129 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.839111816 +0000 UTC m=+148.988776189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.339219 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-catalog-content\") pod \"certified-operators-lcz6p\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.339720 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-utilities\") pod \"certified-operators-lcz6p\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.345235 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rrpjd"] Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.346290 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.350073 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.376779 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9f4r\" (UniqueName: \"kubernetes.io/projected/b4ba59eb-a71b-467f-a035-9824e10a7ffe-kube-api-access-h9f4r\") pod \"certified-operators-lcz6p\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.403196 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrpjd"] Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.407818 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vjfr8" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.440338 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.440596 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.940531359 +0000 UTC m=+149.090195742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.440632 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-catalog-content\") pod \"certified-operators-rrpjd\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.440672 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwmms\" (UniqueName: \"kubernetes.io/projected/5deece60-d2e0-46c9-9c66-d70a01722ba6-kube-api-access-bwmms\") pod \"certified-operators-rrpjd\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.440708 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-utilities\") pod \"certified-operators-rrpjd\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.441076 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.443445 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:55.943427734 +0000 UTC m=+149.093092107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.528783 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.536316 4735 generic.go:334] "Generic (PLEG): container finished" podID="e0a428da-f914-4463-8baf-899aed0920b1" containerID="36925ff56fd907ec6648e9ce50b3fedd34a9ef644e25027d95eb08e527183dd5" exitCode=0 Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.536418 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" event={"ID":"e0a428da-f914-4463-8baf-899aed0920b1","Type":"ContainerDied","Data":"36925ff56fd907ec6648e9ce50b3fedd34a9ef644e25027d95eb08e527183dd5"} Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.543770 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.544113 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwmms\" (UniqueName: \"kubernetes.io/projected/5deece60-d2e0-46c9-9c66-d70a01722ba6-kube-api-access-bwmms\") pod \"certified-operators-rrpjd\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.544165 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-utilities\") pod \"certified-operators-rrpjd\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.544243 4735 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-catalog-content\") pod \"certified-operators-rrpjd\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.544701 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-catalog-content\") pod \"certified-operators-rrpjd\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.544858 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.044789155 +0000 UTC m=+149.194453528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.545563 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-utilities\") pod \"certified-operators-rrpjd\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.574447 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25" event={"ID":"139d3ad6-aaa8-4510-b5d1-6284f8433959","Type":"ContainerStarted","Data":"05cc34c98a61a3023a03c4079c1c5007831c32a854fdcb936e5bbe3fca0f3dca"} Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.574506 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gkb25" event={"ID":"139d3ad6-aaa8-4510-b5d1-6284f8433959","Type":"ContainerStarted","Data":"d79c36d5be29d1feba7e4b0508ec1ff81fae61fa85b7cd47b1acafa97967bcfe"} Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.605862 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwmms\" (UniqueName: \"kubernetes.io/projected/5deece60-d2e0-46c9-9c66-d70a01722ba6-kube-api-access-bwmms\") pod \"certified-operators-rrpjd\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.621093 
4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxgl" event={"ID":"5ff879b4-32b1-430b-bf3c-9f020c76ba0f","Type":"ContainerStarted","Data":"a8c26769cb1c1d9d66bfe433c479154806c3872c4ebc843ca811ad5d956e8d87"} Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.656183 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.656962 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.156949916 +0000 UTC m=+149.306614289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.684087 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" event={"ID":"b4340feb-9a36-45f1-8a67-8df7b8040cf0","Type":"ContainerStarted","Data":"6d5f801d37f0786c11ee014be301a084dc52489e470a58d61e9dbf9ca169caf8"} Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.722297 4735 patch_prober.go:28] interesting pod/router-default-5444994796-zghzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 09:44:55 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 28 09:44:55 crc kubenswrapper[4735]: [+]process-running ok Jan 28 09:44:55 crc kubenswrapper[4735]: healthz check failed Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.722355 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zghzx" podUID="38416160-6197-4351-89dd-65bc4289993b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.723242 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c95cr" event={"ID":"a093e5d2-7fd4-488a-86ef-61bf9ae18728","Type":"ContainerStarted","Data":"54625459d6c5a8600bfd78e14e6609eb34d2be25cd60323c2d807dad9a4e9902"} Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.723276 
4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c95cr" event={"ID":"a093e5d2-7fd4-488a-86ef-61bf9ae18728","Type":"ContainerStarted","Data":"33116374fa5107d5a95f992e89b15b0249a6cb689cf78b9dc7c2985008f533ee"} Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.727541 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q62td" event={"ID":"af4b340f-065b-432e-a0be-d19647b019cb","Type":"ContainerStarted","Data":"6c9152b9c962f944c3a79fe1703e88942cea782319e3beee8e763c7cf6259c1e"} Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.727734 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-q62td" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.729778 4735 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tkljg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.729903 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.739486 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.758879 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-c95cr" podStartSLOduration=123.758860451 podStartE2EDuration="2m3.758860451s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:55.757591008 +0000 UTC m=+148.907255381" watchObservedRunningTime="2026-01-28 09:44:55.758860451 +0000 UTC m=+148.908524824" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.762033 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.764944 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.264911438 +0000 UTC m=+149.414575811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.766669 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbfc2" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.868382 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.871508 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.371477754 +0000 UTC m=+149.521142127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.881066 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-q62td" podStartSLOduration=9.881035943 podStartE2EDuration="9.881035943s" podCreationTimestamp="2026-01-28 09:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:55.828199201 +0000 UTC m=+148.977863604" watchObservedRunningTime="2026-01-28 09:44:55.881035943 +0000 UTC m=+149.030700316" Jan 28 09:44:55 crc kubenswrapper[4735]: I0128 09:44:55.970430 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:55 crc kubenswrapper[4735]: E0128 09:44:55.971475 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.47144645 +0000 UTC m=+149.621110823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.055076 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.074664 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.075449 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.575426959 +0000 UTC m=+149.725091332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.176652 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.177259 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.677235212 +0000 UTC m=+149.826899575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.278312 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.278697 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.778673314 +0000 UTC m=+149.928337687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.308553 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z4vz"] Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.381330 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.381797 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.881781351 +0000 UTC m=+150.031445724 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.466337 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-skpv7"] Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.482733 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.483226 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:56.983205283 +0000 UTC m=+150.132869656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.586333 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.586671 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:57.086652729 +0000 UTC m=+150.236317102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.693137 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.695369 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:57.19534816 +0000 UTC m=+150.345012533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.695696 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzl24"] Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.696796 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:56 crc kubenswrapper[4735]: W0128 09:44:56.698726 4735 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: secrets "redhat-marketplace-dockercfg-x2ctb" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.698814 4735 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-marketplace-dockercfg-x2ctb\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.705179 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcz6p"] Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.712303 4735 patch_prober.go:28] interesting pod/router-default-5444994796-zghzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 09:44:56 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 28 09:44:56 crc kubenswrapper[4735]: [+]process-running ok Jan 28 09:44:56 crc kubenswrapper[4735]: healthz check failed Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.712372 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zghzx" podUID="38416160-6197-4351-89dd-65bc4289993b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 
09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.732480 4735 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vrdhn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.732557 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" podUID="c2380f3e-65de-4df8-b674-a4c9b16d7218" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.781392 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrpjd"] Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.781441 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzl24"] Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.796689 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.796924 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-utilities\") pod \"redhat-marketplace-kzl24\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 
09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.797029 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxkv\" (UniqueName: \"kubernetes.io/projected/b7b59344-a72d-41c4-a8af-405990047117-kube-api-access-5fxkv\") pod \"redhat-marketplace-kzl24\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.797094 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-catalog-content\") pod \"redhat-marketplace-kzl24\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.797281 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:57.297235465 +0000 UTC m=+150.446899838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.810120 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4d52880ab8f2c4f1ef7060c0d6e1b7ad277934258cfe783539d6c02294cba535"} Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.810164 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"460097bdf81cbd8ac54d5eca8efa9fe8ef564627e69d060a85387be36423405a"} Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.818422 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skpv7" event={"ID":"777b99db-3c0c-444d-ad0c-60e58359f6ad","Type":"ContainerStarted","Data":"43868c3ae11d62e09f8febfd2c4b48ae9cc6ca62a3a92e9aeb8ef88016590d62"} Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.862555 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z4vz" event={"ID":"707fe5c2-87d6-4636-9270-b3295925cefd","Type":"ContainerStarted","Data":"1b504004b25102ec1a6d18bd6fa5bf75456bf92e9e2c073c4c15c9380721a118"} Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.899915 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-utilities\") pod \"redhat-marketplace-kzl24\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.900406 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-utilities\") pod \"redhat-marketplace-kzl24\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.900616 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.900704 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fxkv\" (UniqueName: \"kubernetes.io/projected/b7b59344-a72d-41c4-a8af-405990047117-kube-api-access-5fxkv\") pod \"redhat-marketplace-kzl24\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.900733 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-catalog-content\") pod \"redhat-marketplace-kzl24\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:56 crc kubenswrapper[4735]: E0128 09:44:56.902319 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:57.402305273 +0000 UTC m=+150.551969646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.902933 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-catalog-content\") pod \"redhat-marketplace-kzl24\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.944678 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fxkv\" (UniqueName: \"kubernetes.io/projected/b7b59344-a72d-41c4-a8af-405990047117-kube-api-access-5fxkv\") pod \"redhat-marketplace-kzl24\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.945684 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4ee23e8488834b12dfd42d688ab9c198c997310f8080767fce0278a7398d64cf"} Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.951053 4735 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tkljg 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 28 09:44:56 crc kubenswrapper[4735]: I0128 09:44:56.951087 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.001870 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.003208 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:57.503189471 +0000 UTC m=+150.652853834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.003375 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.006500 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:57.506478206 +0000 UTC m=+150.656142589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.091551 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5zwwh"] Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.092936 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.108027 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.108525 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:57.608502635 +0000 UTC m=+150.758167008 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.117955 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zwwh"] Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.209857 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-catalog-content\") pod \"redhat-marketplace-5zwwh\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.210267 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-utilities\") pod \"redhat-marketplace-5zwwh\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.210298 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9df7f\" (UniqueName: \"kubernetes.io/projected/ffa60863-d388-433c-aa18-a2e7c4078c9a-kube-api-access-9df7f\") pod \"redhat-marketplace-5zwwh\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.210361 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.210677 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:57.710662947 +0000 UTC m=+150.860327310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.301073 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vrdhn" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.311600 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.311896 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9df7f\" (UniqueName: 
\"kubernetes.io/projected/ffa60863-d388-433c-aa18-a2e7c4078c9a-kube-api-access-9df7f\") pod \"redhat-marketplace-5zwwh\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.311991 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-catalog-content\") pod \"redhat-marketplace-5zwwh\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.312024 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-utilities\") pod \"redhat-marketplace-5zwwh\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.312449 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-utilities\") pod \"redhat-marketplace-5zwwh\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.312590 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:57.812562962 +0000 UTC m=+150.962227335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.312666 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-catalog-content\") pod \"redhat-marketplace-5zwwh\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.360858 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9df7f\" (UniqueName: \"kubernetes.io/projected/ffa60863-d388-433c-aa18-a2e7c4078c9a-kube-api-access-9df7f\") pod \"redhat-marketplace-5zwwh\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.414276 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.414630 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 09:44:57.91461772 +0000 UTC m=+151.064282093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.495100 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.515329 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a428da-f914-4463-8baf-899aed0920b1-config-volume\") pod \"e0a428da-f914-4463-8baf-899aed0920b1\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.515377 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9rqk\" (UniqueName: \"kubernetes.io/projected/e0a428da-f914-4463-8baf-899aed0920b1-kube-api-access-j9rqk\") pod \"e0a428da-f914-4463-8baf-899aed0920b1\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.515510 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.515541 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a428da-f914-4463-8baf-899aed0920b1-secret-volume\") pod \"e0a428da-f914-4463-8baf-899aed0920b1\" (UID: \"e0a428da-f914-4463-8baf-899aed0920b1\") " Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.518647 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a428da-f914-4463-8baf-899aed0920b1-config-volume" (OuterVolumeSpecName: "config-volume") pod "e0a428da-f914-4463-8baf-899aed0920b1" (UID: "e0a428da-f914-4463-8baf-899aed0920b1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.520995 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.020970221 +0000 UTC m=+151.170634594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.524247 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a428da-f914-4463-8baf-899aed0920b1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e0a428da-f914-4463-8baf-899aed0920b1" (UID: "e0a428da-f914-4463-8baf-899aed0920b1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.533556 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a428da-f914-4463-8baf-899aed0920b1-kube-api-access-j9rqk" (OuterVolumeSpecName: "kube-api-access-j9rqk") pod "e0a428da-f914-4463-8baf-899aed0920b1" (UID: "e0a428da-f914-4463-8baf-899aed0920b1"). InnerVolumeSpecName "kube-api-access-j9rqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.618720 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.618822 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a428da-f914-4463-8baf-899aed0920b1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.618833 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a428da-f914-4463-8baf-899aed0920b1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.618844 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9rqk\" (UniqueName: \"kubernetes.io/projected/e0a428da-f914-4463-8baf-899aed0920b1-kube-api-access-j9rqk\") on node \"crc\" DevicePath \"\"" Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.619180 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.119162051 +0000 UTC m=+151.268826424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.711432 4735 patch_prober.go:28] interesting pod/router-default-5444994796-zghzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 09:44:57 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 28 09:44:57 crc kubenswrapper[4735]: [+]process-running ok Jan 28 09:44:57 crc kubenswrapper[4735]: healthz check failed Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.711975 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zghzx" podUID="38416160-6197-4351-89dd-65bc4289993b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.719520 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.720035 4735 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.220016279 +0000 UTC m=+151.369680652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.764006 4735 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.767509 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.769137 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.775378 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.821514 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.822156 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.322133088 +0000 UTC m=+151.471797461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.922326 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:57 crc kubenswrapper[4735]: E0128 09:44:57.922710 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.422694809 +0000 UTC m=+151.572359182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.967411 4735 generic.go:334] "Generic (PLEG): container finished" podID="707fe5c2-87d6-4636-9270-b3295925cefd" containerID="57572d76ee966148655b99e7d7ae294e67abfa7f6e93fbacd067878cb46f2d06" exitCode=0 Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.967489 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z4vz" event={"ID":"707fe5c2-87d6-4636-9270-b3295925cefd","Type":"ContainerDied","Data":"57572d76ee966148655b99e7d7ae294e67abfa7f6e93fbacd067878cb46f2d06"} Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.972519 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 09:44:57 crc kubenswrapper[4735]: I0128 09:44:57.984066 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d36b9e3a6fbeeff55a7a12a19cf562b4f9f3b6f4260b754459472b1c0f5230fe"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.000009 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ccc93bef783b37194a87c58c824045f9bddcb6e2e134e69da3e4c0e6320ad839"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.000068 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"64dee70e6fa96dc49586ebbc731d103ce4b0c03f20eacaafdbe7a8e9bce71216"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.000551 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.024261 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.024732 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.524716947 +0000 UTC m=+151.674381320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.025579 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" event={"ID":"b4340feb-9a36-45f1-8a67-8df7b8040cf0","Type":"ContainerStarted","Data":"0c118d66d630d5cdf34cea27c614a74b9dacee77e20602a3fbf639716302c1fa"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.025623 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" event={"ID":"b4340feb-9a36-45f1-8a67-8df7b8040cf0","Type":"ContainerStarted","Data":"884cd41adfa157de44558dc51391896e9a11dafbca83f9f4e4563c5188168220"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.045431 4735 generic.go:334] "Generic (PLEG): container finished" podID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerID="b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf" exitCode=0 Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.045550 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrpjd" event={"ID":"5deece60-d2e0-46c9-9c66-d70a01722ba6","Type":"ContainerDied","Data":"b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.045637 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrpjd" 
event={"ID":"5deece60-d2e0-46c9-9c66-d70a01722ba6","Type":"ContainerStarted","Data":"4160d9190b1ffbfeadacb2f41a5441e1f6341ffcf38741cbd0cd58f7fd6d10bb"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.061561 4735 generic.go:334] "Generic (PLEG): container finished" podID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerID="107ce3b3582f6a1a8c3b1149844427f8c3b5b2258f129c3f5b2fb2a3ac5eb94b" exitCode=0 Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.061738 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skpv7" event={"ID":"777b99db-3c0c-444d-ad0c-60e58359f6ad","Type":"ContainerDied","Data":"107ce3b3582f6a1a8c3b1149844427f8c3b5b2258f129c3f5b2fb2a3ac5eb94b"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.071447 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" event={"ID":"e0a428da-f914-4463-8baf-899aed0920b1","Type":"ContainerDied","Data":"1cc6909e587297bf820483d8c014d6d8f2bb083fe4385d850cc3e67edea5c8ac"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.071495 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc6909e587297bf820483d8c014d6d8f2bb083fe4385d850cc3e67edea5c8ac" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.071602 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.106604 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ckmr"] Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.112888 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a428da-f914-4463-8baf-899aed0920b1" containerName="collect-profiles" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.112917 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a428da-f914-4463-8baf-899aed0920b1" containerName="collect-profiles" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.113075 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a428da-f914-4463-8baf-899aed0920b1" containerName="collect-profiles" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.113865 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: W0128 09:44:58.120341 4735 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: secrets "redhat-operators-dockercfg-ct8rh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.120403 4735 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-operators-dockercfg-ct8rh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 09:44:58 crc 
kubenswrapper[4735]: I0128 09:44:58.125350 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.126645 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.626608592 +0000 UTC m=+151.776272965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.139235 4735 generic.go:334] "Generic (PLEG): container finished" podID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerID="cf14fd03aec6bded0a770d8a3a29aa12928af2f2375dd0f4711e4a436803c6b4" exitCode=0 Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.140331 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcz6p" event={"ID":"b4ba59eb-a71b-467f-a035-9824e10a7ffe","Type":"ContainerDied","Data":"cf14fd03aec6bded0a770d8a3a29aa12928af2f2375dd0f4711e4a436803c6b4"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.140361 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcz6p" 
event={"ID":"b4ba59eb-a71b-467f-a035-9824e10a7ffe","Type":"ContainerStarted","Data":"0400549099c42634d1a0be71d97d17aedf4533f6512d1fd453c98292c954a999"} Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.145810 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ckmr"] Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.226946 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-utilities\") pod \"redhat-operators-6ckmr\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.226983 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.227061 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-catalog-content\") pod \"redhat-operators-6ckmr\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.227112 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mks7\" (UniqueName: \"kubernetes.io/projected/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-kube-api-access-2mks7\") pod \"redhat-operators-6ckmr\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " 
pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.228158 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.728145347 +0000 UTC m=+151.877809720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.332001 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.332785 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mks7\" (UniqueName: \"kubernetes.io/projected/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-kube-api-access-2mks7\") pod \"redhat-operators-6ckmr\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.332832 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-utilities\") pod \"redhat-operators-6ckmr\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " 
pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.332898 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-catalog-content\") pod \"redhat-operators-6ckmr\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.333370 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-catalog-content\") pod \"redhat-operators-6ckmr\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.333466 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.833444831 +0000 UTC m=+151.983109204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.334051 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-utilities\") pod \"redhat-operators-6ckmr\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.340790 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzl24"] Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.373114 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mks7\" (UniqueName: \"kubernetes.io/projected/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-kube-api-access-2mks7\") pod \"redhat-operators-6ckmr\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.431660 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zwwh"] Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.437535 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.437977 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:58.937962014 +0000 UTC m=+152.087626387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:58 crc kubenswrapper[4735]: W0128 09:44:58.453848 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffa60863_d388_433c_aa18_a2e7c4078c9a.slice/crio-581575e3772a9c3e7cf9ff2662d4cb36f5321de34608a5c350c00407157cc50b WatchSource:0}: Error finding container 581575e3772a9c3e7cf9ff2662d4cb36f5321de34608a5c350c00407157cc50b: Status 404 returned error can't find the container with id 581575e3772a9c3e7cf9ff2662d4cb36f5321de34608a5c350c00407157cc50b Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.509724 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xzcx4"] Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.512447 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.512481 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:58 crc 
kubenswrapper[4735]: I0128 09:44:58.512593 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.517602 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xzcx4"] Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.536233 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.551670 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.552081 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-catalog-content\") pod \"redhat-operators-xzcx4\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.552236 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:59.05220452 +0000 UTC m=+152.201868893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.552442 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-utilities\") pod \"redhat-operators-xzcx4\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.552480 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gncvk\" (UniqueName: \"kubernetes.io/projected/a150b50e-b4b0-40a5-8947-aabedfc6fa74-kube-api-access-gncvk\") pod \"redhat-operators-xzcx4\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.552513 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.552921 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 09:44:59.052904537 +0000 UTC m=+152.202568910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.580428 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-8xlx7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.580780 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8xlx7" podUID="9c6798d5-2a74-4a6f-95dc-184b82db4a25" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.581150 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-8xlx7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.581166 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8xlx7" podUID="9c6798d5-2a74-4a6f-95dc-184b82db4a25" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 28 09:44:58 crc 
kubenswrapper[4735]: I0128 09:44:58.654618 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.654846 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 09:44:59.154814773 +0000 UTC m=+152.304479146 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.655170 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-catalog-content\") pod \"redhat-operators-xzcx4\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.655222 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-utilities\") pod \"redhat-operators-xzcx4\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 
09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.655245 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gncvk\" (UniqueName: \"kubernetes.io/projected/a150b50e-b4b0-40a5-8947-aabedfc6fa74-kube-api-access-gncvk\") pod \"redhat-operators-xzcx4\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.655305 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.656894 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-utilities\") pod \"redhat-operators-xzcx4\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.657137 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-catalog-content\") pod \"redhat-operators-xzcx4\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:58 crc kubenswrapper[4735]: E0128 09:44:58.657519 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 09:44:59.157510003 +0000 UTC m=+152.307174376 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85zqb" (UID: "a005ba34-6174-4e23-9c80-aea87194684a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.681324 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gncvk\" (UniqueName: \"kubernetes.io/projected/a150b50e-b4b0-40a5-8947-aabedfc6fa74-kube-api-access-gncvk\") pod \"redhat-operators-xzcx4\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.704309 4735 patch_prober.go:28] interesting pod/router-default-5444994796-zghzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 09:44:58 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 28 09:44:58 crc kubenswrapper[4735]: [+]process-running ok Jan 28 09:44:58 crc kubenswrapper[4735]: healthz check failed Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.704424 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zghzx" podUID="38416160-6197-4351-89dd-65bc4289993b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.724146 4735 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T09:44:57.764041671Z","Handler":null,"Name":""} Jan 28 09:44:58 
crc kubenswrapper[4735]: I0128 09:44:58.728942 4735 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.728998 4735 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.756898 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.761560 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.864662 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.869220 4735 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.869311 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:58 crc kubenswrapper[4735]: I0128 09:44:58.910706 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85zqb\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.118838 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.147707 4735 generic.go:334] "Generic (PLEG): container finished" podID="b7b59344-a72d-41c4-a8af-405990047117" containerID="0b093f13f005a1db0955fe85d528a2d29f5137fdc1483af61c55234bcc9773e1" exitCode=0 Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.147807 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzl24" event={"ID":"b7b59344-a72d-41c4-a8af-405990047117","Type":"ContainerDied","Data":"0b093f13f005a1db0955fe85d528a2d29f5137fdc1483af61c55234bcc9773e1"} Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.147842 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzl24" event={"ID":"b7b59344-a72d-41c4-a8af-405990047117","Type":"ContainerStarted","Data":"926b4313ecf6a5f740c5ca75c35392e08a46e4ff7e91224c7251c0d103679acb"} Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.161409 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" event={"ID":"b4340feb-9a36-45f1-8a67-8df7b8040cf0","Type":"ContainerStarted","Data":"0d117b2cea6357ccc1b41052fc459be3331bb081ce49b20dd3be0b16b91ff43d"} Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.163868 4735 generic.go:334] "Generic (PLEG): container finished" podID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerID="27b7267ec88c74e2d5896b2047a8cf69e062916ca65e24f9c60dfbe87a969695" exitCode=0 Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.164694 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zwwh" event={"ID":"ffa60863-d388-433c-aa18-a2e7c4078c9a","Type":"ContainerDied","Data":"27b7267ec88c74e2d5896b2047a8cf69e062916ca65e24f9c60dfbe87a969695"} Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.164732 4735 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-5zwwh" event={"ID":"ffa60863-d388-433c-aa18-a2e7c4078c9a","Type":"ContainerStarted","Data":"581575e3772a9c3e7cf9ff2662d4cb36f5321de34608a5c350c00407157cc50b"} Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.174869 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9mmgl" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.198508 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.198575 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.205937 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2zhvr" podStartSLOduration=13.205909778 podStartE2EDuration="13.205909778s" podCreationTimestamp="2026-01-28 09:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:44:59.20098458 +0000 UTC m=+152.350648973" watchObservedRunningTime="2026-01-28 09:44:59.205909778 +0000 UTC m=+152.355574151" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.206473 4735 patch_prober.go:28] interesting pod/console-f9d7485db-6dsdg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.206531 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6dsdg" podUID="bfc4463e-9f59-4e22-b40d-ebf47cc84969" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 
10.217.0.25:8443: connect: connection refused" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.447357 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85zqb"] Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.480049 4735 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/redhat-operators-6ckmr" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.480132 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:44:59 crc kubenswrapper[4735]: W0128 09:44:59.496339 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda005ba34_6174_4e23_9c80_aea87194684a.slice/crio-05fc8be13c40e6958e75cacb8049613a60e6af457fcee50a70f76a7f9ab637aa WatchSource:0}: Error finding container 05fc8be13c40e6958e75cacb8049613a60e6af457fcee50a70f76a7f9ab637aa: Status 404 returned error can't find the container with id 05fc8be13c40e6958e75cacb8049613a60e6af457fcee50a70f76a7f9ab637aa Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.522908 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.666976 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.670492 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.676085 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.697823 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.703377 4735 patch_prober.go:28] interesting pod/router-default-5444994796-zghzx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 09:44:59 crc kubenswrapper[4735]: [-]has-synced failed: reason withheld Jan 28 09:44:59 crc kubenswrapper[4735]: [+]process-running ok Jan 28 09:44:59 crc kubenswrapper[4735]: healthz check failed Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.703428 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zghzx" podUID="38416160-6197-4351-89dd-65bc4289993b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.883738 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.884717 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.893216 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.894780 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 09:44:59 crc kubenswrapper[4735]: I0128 09:44:59.896399 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.019369 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31b1a123-567a-45d6-8e90-d7dd406d545b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"31b1a123-567a-45d6-8e90-d7dd406d545b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.019791 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31b1a123-567a-45d6-8e90-d7dd406d545b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"31b1a123-567a-45d6-8e90-d7dd406d545b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.068579 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ckmr"] Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.121943 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31b1a123-567a-45d6-8e90-d7dd406d545b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"31b1a123-567a-45d6-8e90-d7dd406d545b\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.122038 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31b1a123-567a-45d6-8e90-d7dd406d545b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"31b1a123-567a-45d6-8e90-d7dd406d545b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.122236 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31b1a123-567a-45d6-8e90-d7dd406d545b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"31b1a123-567a-45d6-8e90-d7dd406d545b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.134040 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp"] Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.134842 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.137146 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.144870 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.151421 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp"] Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.152523 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31b1a123-567a-45d6-8e90-d7dd406d545b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"31b1a123-567a-45d6-8e90-d7dd406d545b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.178323 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ckmr" event={"ID":"297a2cc5-ec84-4fd0-9bf1-25ccc5062045","Type":"ContainerStarted","Data":"e11b72d7cc2c46c62acc8c28d898c6dc3573e1138aba414e3ba439e20721ccbc"} Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.185732 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" event={"ID":"a005ba34-6174-4e23-9c80-aea87194684a","Type":"ContainerStarted","Data":"7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563"} Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.185852 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" 
event={"ID":"a005ba34-6174-4e23-9c80-aea87194684a","Type":"ContainerStarted","Data":"05fc8be13c40e6958e75cacb8049613a60e6af457fcee50a70f76a7f9ab637aa"} Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.186228 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.212514 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" podStartSLOduration=128.212490387 podStartE2EDuration="2m8.212490387s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:45:00.208804631 +0000 UTC m=+153.358469004" watchObservedRunningTime="2026-01-28 09:45:00.212490387 +0000 UTC m=+153.362154760" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.223868 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff4ml\" (UniqueName: \"kubernetes.io/projected/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-kube-api-access-ff4ml\") pod \"collect-profiles-29493225-l7hlp\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.223989 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-secret-volume\") pod \"collect-profiles-29493225-l7hlp\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.224135 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-config-volume\") pod \"collect-profiles-29493225-l7hlp\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.224334 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.241660 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xzcx4"] Jan 28 09:45:00 crc kubenswrapper[4735]: W0128 09:45:00.276904 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda150b50e_b4b0_40a5_8947_aabedfc6fa74.slice/crio-36795afd2f682d85c584560cf5322e95ac89f77b4d7dba0bafc6d004f9966ed2 WatchSource:0}: Error finding container 36795afd2f682d85c584560cf5322e95ac89f77b4d7dba0bafc6d004f9966ed2: Status 404 returned error can't find the container with id 36795afd2f682d85c584560cf5322e95ac89f77b4d7dba0bafc6d004f9966ed2 Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.325838 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-secret-volume\") pod \"collect-profiles-29493225-l7hlp\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.325981 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-config-volume\") pod \"collect-profiles-29493225-l7hlp\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.326304 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff4ml\" (UniqueName: \"kubernetes.io/projected/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-kube-api-access-ff4ml\") pod \"collect-profiles-29493225-l7hlp\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.329358 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-config-volume\") pod \"collect-profiles-29493225-l7hlp\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.343717 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-secret-volume\") pod \"collect-profiles-29493225-l7hlp\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.346281 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff4ml\" (UniqueName: \"kubernetes.io/projected/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-kube-api-access-ff4ml\") pod \"collect-profiles-29493225-l7hlp\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.451669 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.712081 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.721257 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zghzx" Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.880063 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 09:45:00 crc kubenswrapper[4735]: I0128 09:45:00.911309 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp"] Jan 28 09:45:00 crc kubenswrapper[4735]: W0128 09:45:00.943672 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod31b1a123_567a_45d6_8e90_d7dd406d545b.slice/crio-527b081ddef60d20eccba5817b40ef72eb0f2c1b81edbfc9163bd00e17e63a0f WatchSource:0}: Error finding container 527b081ddef60d20eccba5817b40ef72eb0f2c1b81edbfc9163bd00e17e63a0f: Status 404 returned error can't find the container with id 527b081ddef60d20eccba5817b40ef72eb0f2c1b81edbfc9163bd00e17e63a0f Jan 28 09:45:01 crc kubenswrapper[4735]: I0128 09:45:01.220369 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"31b1a123-567a-45d6-8e90-d7dd406d545b","Type":"ContainerStarted","Data":"527b081ddef60d20eccba5817b40ef72eb0f2c1b81edbfc9163bd00e17e63a0f"} Jan 28 09:45:01 crc kubenswrapper[4735]: I0128 09:45:01.222694 4735 generic.go:334] "Generic (PLEG): container finished" podID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerID="11318ff14ea551ad2f5fd52a534c3982eff2b1cf42003004205a5251451c8479" exitCode=0 Jan 28 09:45:01 crc kubenswrapper[4735]: I0128 
09:45:01.222835 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ckmr" event={"ID":"297a2cc5-ec84-4fd0-9bf1-25ccc5062045","Type":"ContainerDied","Data":"11318ff14ea551ad2f5fd52a534c3982eff2b1cf42003004205a5251451c8479"} Jan 28 09:45:01 crc kubenswrapper[4735]: I0128 09:45:01.233564 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" event={"ID":"83849342-c2fb-4a4e-a7cb-c94dc2c02b77","Type":"ContainerStarted","Data":"98592127dffc2c12edf31ada66c148141fd57f8c41666fc73ccad26b03012180"} Jan 28 09:45:01 crc kubenswrapper[4735]: I0128 09:45:01.264943 4735 generic.go:334] "Generic (PLEG): container finished" podID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerID="ac589df831c31ff1a44b32403fc0631fd3d98acef218a1281c52e81609a55083" exitCode=0 Jan 28 09:45:01 crc kubenswrapper[4735]: I0128 09:45:01.266228 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzcx4" event={"ID":"a150b50e-b4b0-40a5-8947-aabedfc6fa74","Type":"ContainerDied","Data":"ac589df831c31ff1a44b32403fc0631fd3d98acef218a1281c52e81609a55083"} Jan 28 09:45:01 crc kubenswrapper[4735]: I0128 09:45:01.266253 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzcx4" event={"ID":"a150b50e-b4b0-40a5-8947-aabedfc6fa74","Type":"ContainerStarted","Data":"36795afd2f682d85c584560cf5322e95ac89f77b4d7dba0bafc6d004f9966ed2"} Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.324272 4735 generic.go:334] "Generic (PLEG): container finished" podID="31b1a123-567a-45d6-8e90-d7dd406d545b" containerID="4cba09deba7701fa8e483223ea408eef5fb6d94611ffdc1019f3b81e6910a456" exitCode=0 Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.324495 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"31b1a123-567a-45d6-8e90-d7dd406d545b","Type":"ContainerDied","Data":"4cba09deba7701fa8e483223ea408eef5fb6d94611ffdc1019f3b81e6910a456"} Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.370564 4735 generic.go:334] "Generic (PLEG): container finished" podID="83849342-c2fb-4a4e-a7cb-c94dc2c02b77" containerID="3304399f014f57829b756999b8b4ab7ae44d5724705a25165e88bbb5944b58c3" exitCode=0 Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.370643 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" event={"ID":"83849342-c2fb-4a4e-a7cb-c94dc2c02b77","Type":"ContainerDied","Data":"3304399f014f57829b756999b8b4ab7ae44d5724705a25165e88bbb5944b58c3"} Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.466349 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.468417 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.471254 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.473012 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.481534 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.508940 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.509029 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.611464 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.611624 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.612467 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.646033 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:03 crc kubenswrapper[4735]: I0128 09:45:03.809129 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:04 crc kubenswrapper[4735]: I0128 09:45:04.493834 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-q62td" Jan 28 09:45:05 crc kubenswrapper[4735]: I0128 09:45:05.431844 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:45:05 crc kubenswrapper[4735]: I0128 09:45:05.432425 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:45:08 crc kubenswrapper[4735]: I0128 09:45:08.579525 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-8xlx7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 28 09:45:08 crc kubenswrapper[4735]: I0128 09:45:08.580103 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8xlx7" podUID="9c6798d5-2a74-4a6f-95dc-184b82db4a25" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 28 09:45:08 crc kubenswrapper[4735]: I0128 09:45:08.581035 4735 patch_prober.go:28] interesting pod/downloads-7954f5f757-8xlx7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: 
connection refused" start-of-body= Jan 28 09:45:08 crc kubenswrapper[4735]: I0128 09:45:08.581135 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8xlx7" podUID="9c6798d5-2a74-4a6f-95dc-184b82db4a25" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 28 09:45:09 crc kubenswrapper[4735]: I0128 09:45:09.205163 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:45:09 crc kubenswrapper[4735]: I0128 09:45:09.209015 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:45:15 crc kubenswrapper[4735]: I0128 09:45:15.063106 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:45:15 crc kubenswrapper[4735]: I0128 09:45:15.070672 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06fddb5b-5b1e-40de-9b21-e28092521208-metrics-certs\") pod \"network-metrics-daemon-zcskf\" (UID: \"06fddb5b-5b1e-40de-9b21-e28092521208\") " pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:45:15 crc kubenswrapper[4735]: I0128 09:45:15.129352 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zcskf" Jan 28 09:45:18 crc kubenswrapper[4735]: I0128 09:45:18.599652 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8xlx7" Jan 28 09:45:19 crc kubenswrapper[4735]: I0128 09:45:19.126368 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.222893 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.236954 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.261774 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-config-volume\") pod \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.261828 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31b1a123-567a-45d6-8e90-d7dd406d545b-kubelet-dir\") pod \"31b1a123-567a-45d6-8e90-d7dd406d545b\" (UID: \"31b1a123-567a-45d6-8e90-d7dd406d545b\") " Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.261936 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-secret-volume\") pod \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 
09:45:20.261965 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31b1a123-567a-45d6-8e90-d7dd406d545b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "31b1a123-567a-45d6-8e90-d7dd406d545b" (UID: "31b1a123-567a-45d6-8e90-d7dd406d545b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.262008 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff4ml\" (UniqueName: \"kubernetes.io/projected/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-kube-api-access-ff4ml\") pod \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\" (UID: \"83849342-c2fb-4a4e-a7cb-c94dc2c02b77\") " Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.262088 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31b1a123-567a-45d6-8e90-d7dd406d545b-kube-api-access\") pod \"31b1a123-567a-45d6-8e90-d7dd406d545b\" (UID: \"31b1a123-567a-45d6-8e90-d7dd406d545b\") " Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.262459 4735 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31b1a123-567a-45d6-8e90-d7dd406d545b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.263953 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-config-volume" (OuterVolumeSpecName: "config-volume") pod "83849342-c2fb-4a4e-a7cb-c94dc2c02b77" (UID: "83849342-c2fb-4a4e-a7cb-c94dc2c02b77"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.275364 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-kube-api-access-ff4ml" (OuterVolumeSpecName: "kube-api-access-ff4ml") pod "83849342-c2fb-4a4e-a7cb-c94dc2c02b77" (UID: "83849342-c2fb-4a4e-a7cb-c94dc2c02b77"). InnerVolumeSpecName "kube-api-access-ff4ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.281064 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83849342-c2fb-4a4e-a7cb-c94dc2c02b77" (UID: "83849342-c2fb-4a4e-a7cb-c94dc2c02b77"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.281258 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b1a123-567a-45d6-8e90-d7dd406d545b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "31b1a123-567a-45d6-8e90-d7dd406d545b" (UID: "31b1a123-567a-45d6-8e90-d7dd406d545b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.364926 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.364964 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff4ml\" (UniqueName: \"kubernetes.io/projected/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-kube-api-access-ff4ml\") on node \"crc\" DevicePath \"\"" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.364974 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31b1a123-567a-45d6-8e90-d7dd406d545b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.364984 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83849342-c2fb-4a4e-a7cb-c94dc2c02b77-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.561204 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" event={"ID":"83849342-c2fb-4a4e-a7cb-c94dc2c02b77","Type":"ContainerDied","Data":"98592127dffc2c12edf31ada66c148141fd57f8c41666fc73ccad26b03012180"} Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.561297 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98592127dffc2c12edf31ada66c148141fd57f8c41666fc73ccad26b03012180" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.561253 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.563900 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"31b1a123-567a-45d6-8e90-d7dd406d545b","Type":"ContainerDied","Data":"527b081ddef60d20eccba5817b40ef72eb0f2c1b81edbfc9163bd00e17e63a0f"} Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.563961 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527b081ddef60d20eccba5817b40ef72eb0f2c1b81edbfc9163bd00e17e63a0f" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.564004 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 09:45:20 crc kubenswrapper[4735]: I0128 09:45:20.600642 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 09:45:29 crc kubenswrapper[4735]: I0128 09:45:29.334713 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx4jv" Jan 28 09:45:30 crc kubenswrapper[4735]: I0128 09:45:30.615470 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"35b5ea39-714a-4d8b-97c0-64bc375fcd3e","Type":"ContainerStarted","Data":"5337192ee6f3b3b947b9232df8e91696f07585e87773590e969769b79ba6bb0f"} Jan 28 09:45:34 crc kubenswrapper[4735]: I0128 09:45:34.773170 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 09:45:34 crc kubenswrapper[4735]: E0128 09:45:34.905322 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 09:45:34 crc kubenswrapper[4735]: E0128 09:45:34.905662 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzxk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4z4vz_openshift-marketplace(707fe5c2-87d6-4636-9270-b3295925cefd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 
09:45:34 crc kubenswrapper[4735]: E0128 09:45:34.906884 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4z4vz" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" Jan 28 09:45:34 crc kubenswrapper[4735]: E0128 09:45:34.926697 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 09:45:34 crc kubenswrapper[4735]: E0128 09:45:34.926882 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgwn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-skpv7_openshift-marketplace(777b99db-3c0c-444d-ad0c-60e58359f6ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 09:45:34 crc kubenswrapper[4735]: E0128 09:45:34.928128 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-skpv7" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" Jan 28 09:45:35 crc 
kubenswrapper[4735]: I0128 09:45:35.430500 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:45:35 crc kubenswrapper[4735]: I0128 09:45:35.430607 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:45:36 crc kubenswrapper[4735]: E0128 09:45:36.132578 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-skpv7" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" Jan 28 09:45:36 crc kubenswrapper[4735]: E0128 09:45:36.132609 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4z4vz" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" Jan 28 09:45:36 crc kubenswrapper[4735]: E0128 09:45:36.201945 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 09:45:36 crc kubenswrapper[4735]: E0128 09:45:36.202420 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kzl24_openshift-marketplace(b7b59344-a72d-41c4-a8af-405990047117): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 09:45:36 crc kubenswrapper[4735]: E0128 09:45:36.203658 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kzl24" podUID="b7b59344-a72d-41c4-a8af-405990047117" Jan 28 09:45:36 crc kubenswrapper[4735]: E0128 09:45:36.258129 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 09:45:36 crc kubenswrapper[4735]: E0128 09:45:36.258280 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9df7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSourc
e{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5zwwh_openshift-marketplace(ffa60863-d388-433c-aa18-a2e7c4078c9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 09:45:36 crc kubenswrapper[4735]: E0128 09:45:36.259601 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5zwwh" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.455671 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 09:45:38 crc kubenswrapper[4735]: E0128 09:45:38.457129 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b1a123-567a-45d6-8e90-d7dd406d545b" containerName="pruner" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.457155 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b1a123-567a-45d6-8e90-d7dd406d545b" containerName="pruner" Jan 28 09:45:38 crc kubenswrapper[4735]: E0128 09:45:38.457192 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83849342-c2fb-4a4e-a7cb-c94dc2c02b77" containerName="collect-profiles" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.457204 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="83849342-c2fb-4a4e-a7cb-c94dc2c02b77" containerName="collect-profiles" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.457404 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="83849342-c2fb-4a4e-a7cb-c94dc2c02b77" containerName="collect-profiles" Jan 28 09:45:38 crc 
kubenswrapper[4735]: I0128 09:45:38.457420 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b1a123-567a-45d6-8e90-d7dd406d545b" containerName="pruner" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.458185 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.463452 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.487705 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.487873 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.588921 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.589047 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.589157 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.612708 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:38 crc kubenswrapper[4735]: I0128 09:45:38.801236 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:39 crc kubenswrapper[4735]: E0128 09:45:39.806592 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kzl24" podUID="b7b59344-a72d-41c4-a8af-405990047117" Jan 28 09:45:39 crc kubenswrapper[4735]: E0128 09:45:39.806605 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5zwwh" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" Jan 28 09:45:39 crc kubenswrapper[4735]: E0128 09:45:39.885865 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 09:45:39 crc kubenswrapper[4735]: E0128 09:45:39.886051 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mks7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6ckmr_openshift-marketplace(297a2cc5-ec84-4fd0-9bf1-25ccc5062045): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 09:45:39 crc kubenswrapper[4735]: E0128 09:45:39.887205 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6ckmr" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" Jan 28 09:45:41 crc 
kubenswrapper[4735]: E0128 09:45:41.687376 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6ckmr" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" Jan 28 09:45:41 crc kubenswrapper[4735]: E0128 09:45:41.759568 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 09:45:41 crc kubenswrapper[4735]: E0128 09:45:41.760029 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9f4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lcz6p_openshift-marketplace(b4ba59eb-a71b-467f-a035-9824e10a7ffe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 09:45:41 crc kubenswrapper[4735]: E0128 09:45:41.761237 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lcz6p" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" Jan 28 09:45:41 crc 
kubenswrapper[4735]: E0128 09:45:41.803032 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 09:45:41 crc kubenswrapper[4735]: E0128 09:45:41.803270 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwmms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-rrpjd_openshift-marketplace(5deece60-d2e0-46c9-9c66-d70a01722ba6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 09:45:41 crc kubenswrapper[4735]: E0128 09:45:41.807440 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rrpjd" podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" Jan 28 09:45:41 crc kubenswrapper[4735]: E0128 09:45:41.822059 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 09:45:41 crc kubenswrapper[4735]: E0128 09:45:41.822215 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gncvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xzcx4_openshift-marketplace(a150b50e-b4b0-40a5-8947-aabedfc6fa74): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 09:45:41 crc kubenswrapper[4735]: E0128 09:45:41.825074 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xzcx4" podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" Jan 28 09:45:42 crc 
kubenswrapper[4735]: I0128 09:45:42.108757 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zcskf"] Jan 28 09:45:42 crc kubenswrapper[4735]: W0128 09:45:42.111882 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06fddb5b_5b1e_40de_9b21_e28092521208.slice/crio-006f639e0bf60cc6f24bac32be1339723e85b85594cfb35d4d4531f199ab49cf WatchSource:0}: Error finding container 006f639e0bf60cc6f24bac32be1339723e85b85594cfb35d4d4531f199ab49cf: Status 404 returned error can't find the container with id 006f639e0bf60cc6f24bac32be1339723e85b85594cfb35d4d4531f199ab49cf Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.213640 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.683835 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"35b5ea39-714a-4d8b-97c0-64bc375fcd3e","Type":"ContainerStarted","Data":"760d924348f7308e8b023ed071db4fc212bca7bab7758cc3941eb14e48a74445"} Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.685138 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"68ad2ee3-f4a2-494f-b37e-c05771978ea7","Type":"ContainerStarted","Data":"f35c71423538d2776d23c508557c997a4c2c5b32ad1472d8d55026ab1b387105"} Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.685309 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"68ad2ee3-f4a2-494f-b37e-c05771978ea7","Type":"ContainerStarted","Data":"9eb99cab3d08f0c3bc87b6be8375e2d16bdcdfead1099e519486f05490a11003"} Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.686704 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-zcskf" event={"ID":"06fddb5b-5b1e-40de-9b21-e28092521208","Type":"ContainerStarted","Data":"cf4595d261e17c6630a9caeda3b7909ad212199614796d077fd3fbf13bcbd38a"} Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.686788 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zcskf" event={"ID":"06fddb5b-5b1e-40de-9b21-e28092521208","Type":"ContainerStarted","Data":"3e4024c44555d36df2c49bbf90c1ec52e6e7f9d0d72cff1ebc57a28f204f87ad"} Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.686804 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zcskf" event={"ID":"06fddb5b-5b1e-40de-9b21-e28092521208","Type":"ContainerStarted","Data":"006f639e0bf60cc6f24bac32be1339723e85b85594cfb35d4d4531f199ab49cf"} Jan 28 09:45:42 crc kubenswrapper[4735]: E0128 09:45:42.687503 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xzcx4" podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" Jan 28 09:45:42 crc kubenswrapper[4735]: E0128 09:45:42.688147 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lcz6p" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" Jan 28 09:45:42 crc kubenswrapper[4735]: E0128 09:45:42.688196 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rrpjd" 
podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.724527 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-zcskf" podStartSLOduration=170.724497757 podStartE2EDuration="2m50.724497757s" podCreationTimestamp="2026-01-28 09:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:45:42.72225283 +0000 UTC m=+195.871917203" watchObservedRunningTime="2026-01-28 09:45:42.724497757 +0000 UTC m=+195.874162130" Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.729226 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=39.72920773 podStartE2EDuration="39.72920773s" podCreationTimestamp="2026-01-28 09:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:45:42.707106436 +0000 UTC m=+195.856770819" watchObservedRunningTime="2026-01-28 09:45:42.72920773 +0000 UTC m=+195.878872103" Jan 28 09:45:42 crc kubenswrapper[4735]: I0128 09:45:42.760848 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.76082872 podStartE2EDuration="4.76082872s" podCreationTimestamp="2026-01-28 09:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:45:42.758007867 +0000 UTC m=+195.907672250" watchObservedRunningTime="2026-01-28 09:45:42.76082872 +0000 UTC m=+195.910493093" Jan 28 09:45:43 crc kubenswrapper[4735]: I0128 09:45:43.695723 4735 generic.go:334] "Generic (PLEG): container finished" podID="35b5ea39-714a-4d8b-97c0-64bc375fcd3e" 
containerID="760d924348f7308e8b023ed071db4fc212bca7bab7758cc3941eb14e48a74445" exitCode=0 Jan 28 09:45:43 crc kubenswrapper[4735]: I0128 09:45:43.695886 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"35b5ea39-714a-4d8b-97c0-64bc375fcd3e","Type":"ContainerDied","Data":"760d924348f7308e8b023ed071db4fc212bca7bab7758cc3941eb14e48a74445"} Jan 28 09:45:43 crc kubenswrapper[4735]: I0128 09:45:43.699049 4735 generic.go:334] "Generic (PLEG): container finished" podID="68ad2ee3-f4a2-494f-b37e-c05771978ea7" containerID="f35c71423538d2776d23c508557c997a4c2c5b32ad1472d8d55026ab1b387105" exitCode=0 Jan 28 09:45:43 crc kubenswrapper[4735]: I0128 09:45:43.699196 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"68ad2ee3-f4a2-494f-b37e-c05771978ea7","Type":"ContainerDied","Data":"f35c71423538d2776d23c508557c997a4c2c5b32ad1472d8d55026ab1b387105"} Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.024649 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.030190 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.051059 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 09:45:45 crc kubenswrapper[4735]: E0128 09:45:45.051316 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b5ea39-714a-4d8b-97c0-64bc375fcd3e" containerName="pruner" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.051333 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b5ea39-714a-4d8b-97c0-64bc375fcd3e" containerName="pruner" Jan 28 09:45:45 crc kubenswrapper[4735]: E0128 09:45:45.051342 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ad2ee3-f4a2-494f-b37e-c05771978ea7" containerName="pruner" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.051348 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ad2ee3-f4a2-494f-b37e-c05771978ea7" containerName="pruner" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.051471 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ad2ee3-f4a2-494f-b37e-c05771978ea7" containerName="pruner" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.051503 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b5ea39-714a-4d8b-97c0-64bc375fcd3e" containerName="pruner" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.051968 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.055962 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082216 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kubelet-dir\") pod \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\" (UID: \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\") " Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082301 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kube-api-access\") pod \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\" (UID: \"35b5ea39-714a-4d8b-97c0-64bc375fcd3e\") " Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082350 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "35b5ea39-714a-4d8b-97c0-64bc375fcd3e" (UID: "35b5ea39-714a-4d8b-97c0-64bc375fcd3e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082452 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kubelet-dir\") pod \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\" (UID: \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\") " Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082488 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kube-api-access\") pod \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\" (UID: \"68ad2ee3-f4a2-494f-b37e-c05771978ea7\") " Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082684 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36a77118-849f-4578-96fb-970de9c7128f-kube-api-access\") pod \"installer-9-crc\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082733 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082773 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-var-lock\") pod \"installer-9-crc\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082853 
4735 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.082924 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "68ad2ee3-f4a2-494f-b37e-c05771978ea7" (UID: "68ad2ee3-f4a2-494f-b37e-c05771978ea7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.089263 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "35b5ea39-714a-4d8b-97c0-64bc375fcd3e" (UID: "35b5ea39-714a-4d8b-97c0-64bc375fcd3e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.089806 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "68ad2ee3-f4a2-494f-b37e-c05771978ea7" (UID: "68ad2ee3-f4a2-494f-b37e-c05771978ea7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.184090 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.184175 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-var-lock\") pod \"installer-9-crc\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.184261 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36a77118-849f-4578-96fb-970de9c7128f-kube-api-access\") pod \"installer-9-crc\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.184310 4735 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.184326 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68ad2ee3-f4a2-494f-b37e-c05771978ea7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.184339 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b5ea39-714a-4d8b-97c0-64bc375fcd3e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 09:45:45 crc 
kubenswrapper[4735]: I0128 09:45:45.184336 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.184404 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-var-lock\") pod \"installer-9-crc\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.202840 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36a77118-849f-4578-96fb-970de9c7128f-kube-api-access\") pod \"installer-9-crc\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.374684 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.572727 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.711438 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"35b5ea39-714a-4d8b-97c0-64bc375fcd3e","Type":"ContainerDied","Data":"5337192ee6f3b3b947b9232df8e91696f07585e87773590e969769b79ba6bb0f"} Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.711486 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5337192ee6f3b3b947b9232df8e91696f07585e87773590e969769b79ba6bb0f" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.711559 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.720151 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"68ad2ee3-f4a2-494f-b37e-c05771978ea7","Type":"ContainerDied","Data":"9eb99cab3d08f0c3bc87b6be8375e2d16bdcdfead1099e519486f05490a11003"} Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.720205 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eb99cab3d08f0c3bc87b6be8375e2d16bdcdfead1099e519486f05490a11003" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.720272 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 09:45:45 crc kubenswrapper[4735]: I0128 09:45:45.722203 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"36a77118-849f-4578-96fb-970de9c7128f","Type":"ContainerStarted","Data":"9bef68f88e9a3111db833c5d49568efbd21962d70a024b69786eed06e804e6d7"} Jan 28 09:45:46 crc kubenswrapper[4735]: I0128 09:45:46.728403 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"36a77118-849f-4578-96fb-970de9c7128f","Type":"ContainerStarted","Data":"cdbd6e2e0f8ee46da466e931deb320ab86d0db5c23250ef1b032ed1cde671784"} Jan 28 09:45:46 crc kubenswrapper[4735]: I0128 09:45:46.746682 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.746662317 podStartE2EDuration="1.746662317s" podCreationTimestamp="2026-01-28 09:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:45:46.743260059 +0000 UTC m=+199.892924432" watchObservedRunningTime="2026-01-28 09:45:46.746662317 +0000 UTC m=+199.896326690" Jan 28 09:45:49 crc kubenswrapper[4735]: I0128 09:45:49.746872 4735 generic.go:334] "Generic (PLEG): container finished" podID="707fe5c2-87d6-4636-9270-b3295925cefd" containerID="5cd6c4367ff17e2ce87e0f91ffdddcefbeb90e0c1ddb94e4c7bfdd732eb4cdec" exitCode=0 Jan 28 09:45:49 crc kubenswrapper[4735]: I0128 09:45:49.747028 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z4vz" event={"ID":"707fe5c2-87d6-4636-9270-b3295925cefd","Type":"ContainerDied","Data":"5cd6c4367ff17e2ce87e0f91ffdddcefbeb90e0c1ddb94e4c7bfdd732eb4cdec"} Jan 28 09:45:51 crc kubenswrapper[4735]: I0128 09:45:51.773412 4735 generic.go:334] "Generic (PLEG): container finished" 
podID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerID="97e5a84f20caf366fbc3e385f5943b91ed9c29148c85e6f49866730b80b586a4" exitCode=0 Jan 28 09:45:51 crc kubenswrapper[4735]: I0128 09:45:51.773496 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skpv7" event={"ID":"777b99db-3c0c-444d-ad0c-60e58359f6ad","Type":"ContainerDied","Data":"97e5a84f20caf366fbc3e385f5943b91ed9c29148c85e6f49866730b80b586a4"} Jan 28 09:45:51 crc kubenswrapper[4735]: I0128 09:45:51.778291 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z4vz" event={"ID":"707fe5c2-87d6-4636-9270-b3295925cefd","Type":"ContainerStarted","Data":"31910b6136de3f52b1eafa990902d962b14e3031697893264d2eac51fa437bce"} Jan 28 09:45:51 crc kubenswrapper[4735]: I0128 09:45:51.806061 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4z4vz" podStartSLOduration=5.101304929 podStartE2EDuration="57.806037777s" podCreationTimestamp="2026-01-28 09:44:54 +0000 UTC" firstStartedPulling="2026-01-28 09:44:57.972054181 +0000 UTC m=+151.121718554" lastFinishedPulling="2026-01-28 09:45:50.676787039 +0000 UTC m=+203.826451402" observedRunningTime="2026-01-28 09:45:51.805155874 +0000 UTC m=+204.954820247" watchObservedRunningTime="2026-01-28 09:45:51.806037777 +0000 UTC m=+204.955702150" Jan 28 09:45:52 crc kubenswrapper[4735]: I0128 09:45:52.785498 4735 generic.go:334] "Generic (PLEG): container finished" podID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerID="7d0fcf2cf61e88c544568aa2b3ac6e643711c292c24bfb003f130ec33f25253b" exitCode=0 Jan 28 09:45:52 crc kubenswrapper[4735]: I0128 09:45:52.785563 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zwwh" event={"ID":"ffa60863-d388-433c-aa18-a2e7c4078c9a","Type":"ContainerDied","Data":"7d0fcf2cf61e88c544568aa2b3ac6e643711c292c24bfb003f130ec33f25253b"} Jan 28 
09:45:52 crc kubenswrapper[4735]: I0128 09:45:52.788004 4735 generic.go:334] "Generic (PLEG): container finished" podID="b7b59344-a72d-41c4-a8af-405990047117" containerID="7faa76d792f955a5031737e123a49e5da4db8d25d64cac27744f3610937dc6dc" exitCode=0 Jan 28 09:45:52 crc kubenswrapper[4735]: I0128 09:45:52.788095 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzl24" event={"ID":"b7b59344-a72d-41c4-a8af-405990047117","Type":"ContainerDied","Data":"7faa76d792f955a5031737e123a49e5da4db8d25d64cac27744f3610937dc6dc"} Jan 28 09:45:52 crc kubenswrapper[4735]: I0128 09:45:52.797935 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skpv7" event={"ID":"777b99db-3c0c-444d-ad0c-60e58359f6ad","Type":"ContainerStarted","Data":"da2c3df07dbcca5b4bd1a8cb150c190ba4dc773d44f6218d54b23277c241b6de"} Jan 28 09:45:52 crc kubenswrapper[4735]: I0128 09:45:52.829600 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-skpv7" podStartSLOduration=4.687761157 podStartE2EDuration="58.829573639s" podCreationTimestamp="2026-01-28 09:44:54 +0000 UTC" firstStartedPulling="2026-01-28 09:44:58.068495663 +0000 UTC m=+151.218160036" lastFinishedPulling="2026-01-28 09:45:52.210308145 +0000 UTC m=+205.359972518" observedRunningTime="2026-01-28 09:45:52.826422877 +0000 UTC m=+205.976087270" watchObservedRunningTime="2026-01-28 09:45:52.829573639 +0000 UTC m=+205.979238012" Jan 28 09:45:53 crc kubenswrapper[4735]: I0128 09:45:53.804332 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzl24" event={"ID":"b7b59344-a72d-41c4-a8af-405990047117","Type":"ContainerStarted","Data":"0b7cea084b689e81ee7e74cd4e5d5b790ec12189cb8e0b35e64e9c45bc78c207"} Jan 28 09:45:53 crc kubenswrapper[4735]: I0128 09:45:53.806812 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5zwwh" event={"ID":"ffa60863-d388-433c-aa18-a2e7c4078c9a","Type":"ContainerStarted","Data":"bd463e548732ad31c71ae8cd2a359e9998e4606e280f334d6f9e5ac4707e7196"} Jan 28 09:45:53 crc kubenswrapper[4735]: I0128 09:45:53.822848 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzl24" podStartSLOduration=3.807192809 podStartE2EDuration="57.822828402s" podCreationTimestamp="2026-01-28 09:44:56 +0000 UTC" firstStartedPulling="2026-01-28 09:44:59.149998097 +0000 UTC m=+152.299662470" lastFinishedPulling="2026-01-28 09:45:53.16563369 +0000 UTC m=+206.315298063" observedRunningTime="2026-01-28 09:45:53.821488067 +0000 UTC m=+206.971152440" watchObservedRunningTime="2026-01-28 09:45:53.822828402 +0000 UTC m=+206.972492785" Jan 28 09:45:53 crc kubenswrapper[4735]: I0128 09:45:53.848236 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5zwwh" podStartSLOduration=2.701766548 podStartE2EDuration="56.848216803s" podCreationTimestamp="2026-01-28 09:44:57 +0000 UTC" firstStartedPulling="2026-01-28 09:44:59.166089094 +0000 UTC m=+152.315753467" lastFinishedPulling="2026-01-28 09:45:53.312539359 +0000 UTC m=+206.462203722" observedRunningTime="2026-01-28 09:45:53.846914799 +0000 UTC m=+206.996579182" watchObservedRunningTime="2026-01-28 09:45:53.848216803 +0000 UTC m=+206.997881166" Jan 28 09:45:54 crc kubenswrapper[4735]: I0128 09:45:54.814785 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzcx4" event={"ID":"a150b50e-b4b0-40a5-8947-aabedfc6fa74","Type":"ContainerStarted","Data":"30b5f8481b5eee47ab39131dcc3e4ea1cc6a89e5dbd3bc7c18573a8fae968680"} Jan 28 09:45:54 crc kubenswrapper[4735]: I0128 09:45:54.918384 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:45:54 crc 
kubenswrapper[4735]: I0128 09:45:54.918465 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:45:55 crc kubenswrapper[4735]: I0128 09:45:55.076254 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:45:55 crc kubenswrapper[4735]: I0128 09:45:55.350806 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:45:55 crc kubenswrapper[4735]: I0128 09:45:55.350885 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:45:55 crc kubenswrapper[4735]: I0128 09:45:55.395466 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:45:55 crc kubenswrapper[4735]: I0128 09:45:55.822241 4735 generic.go:334] "Generic (PLEG): container finished" podID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerID="30b5f8481b5eee47ab39131dcc3e4ea1cc6a89e5dbd3bc7c18573a8fae968680" exitCode=0 Jan 28 09:45:55 crc kubenswrapper[4735]: I0128 09:45:55.822341 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzcx4" event={"ID":"a150b50e-b4b0-40a5-8947-aabedfc6fa74","Type":"ContainerDied","Data":"30b5f8481b5eee47ab39131dcc3e4ea1cc6a89e5dbd3bc7c18573a8fae968680"} Jan 28 09:45:55 crc kubenswrapper[4735]: I0128 09:45:55.827671 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ckmr" event={"ID":"297a2cc5-ec84-4fd0-9bf1-25ccc5062045","Type":"ContainerStarted","Data":"463e251f350349fe8530a13fd0a10f7b87fb292c135b2c88bc2bcca7bc8ccb48"} Jan 28 09:45:55 crc kubenswrapper[4735]: I0128 09:45:55.874591 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:45:56 crc kubenswrapper[4735]: I0128 09:45:56.837173 4735 generic.go:334] "Generic (PLEG): container finished" podID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerID="463e251f350349fe8530a13fd0a10f7b87fb292c135b2c88bc2bcca7bc8ccb48" exitCode=0 Jan 28 09:45:56 crc kubenswrapper[4735]: I0128 09:45:56.837908 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ckmr" event={"ID":"297a2cc5-ec84-4fd0-9bf1-25ccc5062045","Type":"ContainerDied","Data":"463e251f350349fe8530a13fd0a10f7b87fb292c135b2c88bc2bcca7bc8ccb48"} Jan 28 09:45:57 crc kubenswrapper[4735]: I0128 09:45:57.770255 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:45:57 crc kubenswrapper[4735]: I0128 09:45:57.770359 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:45:57 crc kubenswrapper[4735]: I0128 09:45:57.775593 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:45:57 crc kubenswrapper[4735]: I0128 09:45:57.775649 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:45:57 crc kubenswrapper[4735]: I0128 09:45:57.828176 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:45:57 crc kubenswrapper[4735]: I0128 09:45:57.846325 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:45:58 crc kubenswrapper[4735]: I0128 09:45:58.902031 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:45:59 crc kubenswrapper[4735]: I0128 
09:45:59.857588 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzcx4" event={"ID":"a150b50e-b4b0-40a5-8947-aabedfc6fa74","Type":"ContainerStarted","Data":"638ee8b9dc8943481f5075031492a59b1ae532cb06a443af4c7283eb6aa11397"} Jan 28 09:45:59 crc kubenswrapper[4735]: I0128 09:45:59.863657 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ckmr" event={"ID":"297a2cc5-ec84-4fd0-9bf1-25ccc5062045","Type":"ContainerStarted","Data":"80eadcd670c298d4bb0c29e07867446849ccd009d39db6e90edf2baf3833f029"} Jan 28 09:45:59 crc kubenswrapper[4735]: I0128 09:45:59.867663 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrpjd" event={"ID":"5deece60-d2e0-46c9-9c66-d70a01722ba6","Type":"ContainerStarted","Data":"06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad"} Jan 28 09:45:59 crc kubenswrapper[4735]: I0128 09:45:59.880661 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xzcx4" podStartSLOduration=3.832033146 podStartE2EDuration="1m1.880641318s" podCreationTimestamp="2026-01-28 09:44:58 +0000 UTC" firstStartedPulling="2026-01-28 09:45:01.277695927 +0000 UTC m=+154.427360300" lastFinishedPulling="2026-01-28 09:45:59.326304089 +0000 UTC m=+212.475968472" observedRunningTime="2026-01-28 09:45:59.878903343 +0000 UTC m=+213.028567716" watchObservedRunningTime="2026-01-28 09:45:59.880641318 +0000 UTC m=+213.030305691" Jan 28 09:45:59 crc kubenswrapper[4735]: I0128 09:45:59.926415 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ckmr" podStartSLOduration=3.701505369 podStartE2EDuration="1m1.926391051s" podCreationTimestamp="2026-01-28 09:44:58 +0000 UTC" firstStartedPulling="2026-01-28 09:45:01.235968674 +0000 UTC m=+154.385633047" lastFinishedPulling="2026-01-28 09:45:59.460854356 +0000 UTC 
m=+212.610518729" observedRunningTime="2026-01-28 09:45:59.918806824 +0000 UTC m=+213.068471197" watchObservedRunningTime="2026-01-28 09:45:59.926391051 +0000 UTC m=+213.076055424" Jan 28 09:46:00 crc kubenswrapper[4735]: I0128 09:46:00.873379 4735 generic.go:334] "Generic (PLEG): container finished" podID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerID="5e6baa5b1a5ffc05eff7a8899a247dd4dfd6d1eb07395a64f3c126078d8d34ab" exitCode=0 Jan 28 09:46:00 crc kubenswrapper[4735]: I0128 09:46:00.873444 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcz6p" event={"ID":"b4ba59eb-a71b-467f-a035-9824e10a7ffe","Type":"ContainerDied","Data":"5e6baa5b1a5ffc05eff7a8899a247dd4dfd6d1eb07395a64f3c126078d8d34ab"} Jan 28 09:46:00 crc kubenswrapper[4735]: I0128 09:46:00.876505 4735 generic.go:334] "Generic (PLEG): container finished" podID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerID="06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad" exitCode=0 Jan 28 09:46:00 crc kubenswrapper[4735]: I0128 09:46:00.876536 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrpjd" event={"ID":"5deece60-d2e0-46c9-9c66-d70a01722ba6","Type":"ContainerDied","Data":"06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad"} Jan 28 09:46:02 crc kubenswrapper[4735]: I0128 09:46:02.536760 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zwwh"] Jan 28 09:46:02 crc kubenswrapper[4735]: I0128 09:46:02.537011 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5zwwh" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerName="registry-server" containerID="cri-o://bd463e548732ad31c71ae8cd2a359e9998e4606e280f334d6f9e5ac4707e7196" gracePeriod=2 Jan 28 09:46:02 crc kubenswrapper[4735]: I0128 09:46:02.899901 4735 generic.go:334] "Generic (PLEG): container finished" 
podID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerID="bd463e548732ad31c71ae8cd2a359e9998e4606e280f334d6f9e5ac4707e7196" exitCode=0 Jan 28 09:46:02 crc kubenswrapper[4735]: I0128 09:46:02.900227 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zwwh" event={"ID":"ffa60863-d388-433c-aa18-a2e7c4078c9a","Type":"ContainerDied","Data":"bd463e548732ad31c71ae8cd2a359e9998e4606e280f334d6f9e5ac4707e7196"} Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.096194 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.185376 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-catalog-content\") pod \"ffa60863-d388-433c-aa18-a2e7c4078c9a\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.185487 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9df7f\" (UniqueName: \"kubernetes.io/projected/ffa60863-d388-433c-aa18-a2e7c4078c9a-kube-api-access-9df7f\") pod \"ffa60863-d388-433c-aa18-a2e7c4078c9a\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.186040 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-utilities\") pod \"ffa60863-d388-433c-aa18-a2e7c4078c9a\" (UID: \"ffa60863-d388-433c-aa18-a2e7c4078c9a\") " Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.186961 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-utilities" (OuterVolumeSpecName: "utilities") pod 
"ffa60863-d388-433c-aa18-a2e7c4078c9a" (UID: "ffa60863-d388-433c-aa18-a2e7c4078c9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.199903 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa60863-d388-433c-aa18-a2e7c4078c9a-kube-api-access-9df7f" (OuterVolumeSpecName: "kube-api-access-9df7f") pod "ffa60863-d388-433c-aa18-a2e7c4078c9a" (UID: "ffa60863-d388-433c-aa18-a2e7c4078c9a"). InnerVolumeSpecName "kube-api-access-9df7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.210675 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffa60863-d388-433c-aa18-a2e7c4078c9a" (UID: "ffa60863-d388-433c-aa18-a2e7c4078c9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.287857 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.287932 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9df7f\" (UniqueName: \"kubernetes.io/projected/ffa60863-d388-433c-aa18-a2e7c4078c9a-kube-api-access-9df7f\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.287952 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa60863-d388-433c-aa18-a2e7c4078c9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.909861 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zwwh" Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.909861 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zwwh" event={"ID":"ffa60863-d388-433c-aa18-a2e7c4078c9a","Type":"ContainerDied","Data":"581575e3772a9c3e7cf9ff2662d4cb36f5321de34608a5c350c00407157cc50b"} Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.909960 4735 scope.go:117] "RemoveContainer" containerID="bd463e548732ad31c71ae8cd2a359e9998e4606e280f334d6f9e5ac4707e7196" Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.913606 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcz6p" event={"ID":"b4ba59eb-a71b-467f-a035-9824e10a7ffe","Type":"ContainerStarted","Data":"83562bc2c0908a7791a47689f660f443cb82833b763025069e045f0cc1570632"} Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.932403 4735 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-5zwwh"] Jan 28 09:46:03 crc kubenswrapper[4735]: I0128 09:46:03.940801 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zwwh"] Jan 28 09:46:04 crc kubenswrapper[4735]: I0128 09:46:04.228169 4735 scope.go:117] "RemoveContainer" containerID="7d0fcf2cf61e88c544568aa2b3ac6e643711c292c24bfb003f130ec33f25253b" Jan 28 09:46:04 crc kubenswrapper[4735]: I0128 09:46:04.949814 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lcz6p" podStartSLOduration=5.210071203 podStartE2EDuration="1m9.949738982s" podCreationTimestamp="2026-01-28 09:44:55 +0000 UTC" firstStartedPulling="2026-01-28 09:44:58.143900041 +0000 UTC m=+151.293564414" lastFinishedPulling="2026-01-28 09:46:02.88356782 +0000 UTC m=+216.033232193" observedRunningTime="2026-01-28 09:46:04.946475567 +0000 UTC m=+218.096139940" watchObservedRunningTime="2026-01-28 09:46:04.949738982 +0000 UTC m=+218.099403365" Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.163300 4735 scope.go:117] "RemoveContainer" containerID="27b7267ec88c74e2d5896b2047a8cf69e062916ca65e24f9c60dfbe87a969695" Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.399332 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.430121 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.430207 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" 
podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.430271 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.431155 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.431300 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259" gracePeriod=600 Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.512783 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" path="/var/lib/kubelet/pods/ffa60863-d388-433c-aa18-a2e7c4078c9a/volumes" Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.530519 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.530599 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:46:05 crc kubenswrapper[4735]: I0128 09:46:05.591346 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:46:06 crc kubenswrapper[4735]: I0128 09:46:06.942473 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259" exitCode=0 Jan 28 09:46:06 crc kubenswrapper[4735]: I0128 09:46:06.942671 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259"} Jan 28 09:46:06 crc kubenswrapper[4735]: I0128 09:46:06.948078 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrpjd" event={"ID":"5deece60-d2e0-46c9-9c66-d70a01722ba6","Type":"ContainerStarted","Data":"167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361"} Jan 28 09:46:06 crc kubenswrapper[4735]: I0128 09:46:06.972685 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rrpjd" podStartSLOduration=4.855977902 podStartE2EDuration="1m11.972662866s" podCreationTimestamp="2026-01-28 09:44:55 +0000 UTC" firstStartedPulling="2026-01-28 09:44:58.046817011 +0000 UTC m=+151.196481384" lastFinishedPulling="2026-01-28 09:46:05.163501965 +0000 UTC m=+218.313166348" observedRunningTime="2026-01-28 09:46:06.970057568 +0000 UTC m=+220.119721941" watchObservedRunningTime="2026-01-28 09:46:06.972662866 +0000 UTC m=+220.122327239" Jan 28 09:46:07 crc kubenswrapper[4735]: I0128 09:46:07.138548 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-skpv7"] Jan 28 09:46:07 crc kubenswrapper[4735]: I0128 09:46:07.139378 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-skpv7" 
podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerName="registry-server" containerID="cri-o://da2c3df07dbcca5b4bd1a8cb150c190ba4dc773d44f6218d54b23277c241b6de" gracePeriod=2 Jan 28 09:46:07 crc kubenswrapper[4735]: I0128 09:46:07.814192 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:46:08 crc kubenswrapper[4735]: I0128 09:46:08.960776 4735 generic.go:334] "Generic (PLEG): container finished" podID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerID="da2c3df07dbcca5b4bd1a8cb150c190ba4dc773d44f6218d54b23277c241b6de" exitCode=0 Jan 28 09:46:08 crc kubenswrapper[4735]: I0128 09:46:08.960807 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skpv7" event={"ID":"777b99db-3c0c-444d-ad0c-60e58359f6ad","Type":"ContainerDied","Data":"da2c3df07dbcca5b4bd1a8cb150c190ba4dc773d44f6218d54b23277c241b6de"} Jan 28 09:46:08 crc kubenswrapper[4735]: I0128 09:46:08.963474 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"08c803f74dc0e01fd80ba376f98d82a56fa37d9898555cb46e404071923fd7de"} Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.213848 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.373274 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgwn7\" (UniqueName: \"kubernetes.io/projected/777b99db-3c0c-444d-ad0c-60e58359f6ad-kube-api-access-kgwn7\") pod \"777b99db-3c0c-444d-ad0c-60e58359f6ad\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.374624 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-utilities" (OuterVolumeSpecName: "utilities") pod "777b99db-3c0c-444d-ad0c-60e58359f6ad" (UID: "777b99db-3c0c-444d-ad0c-60e58359f6ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.374837 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-utilities\") pod \"777b99db-3c0c-444d-ad0c-60e58359f6ad\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.374923 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-catalog-content\") pod \"777b99db-3c0c-444d-ad0c-60e58359f6ad\" (UID: \"777b99db-3c0c-444d-ad0c-60e58359f6ad\") " Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.375199 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.381768 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/777b99db-3c0c-444d-ad0c-60e58359f6ad-kube-api-access-kgwn7" (OuterVolumeSpecName: "kube-api-access-kgwn7") pod "777b99db-3c0c-444d-ad0c-60e58359f6ad" (UID: "777b99db-3c0c-444d-ad0c-60e58359f6ad"). InnerVolumeSpecName "kube-api-access-kgwn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.443159 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "777b99db-3c0c-444d-ad0c-60e58359f6ad" (UID: "777b99db-3c0c-444d-ad0c-60e58359f6ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.475910 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgwn7\" (UniqueName: \"kubernetes.io/projected/777b99db-3c0c-444d-ad0c-60e58359f6ad-kube-api-access-kgwn7\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.475951 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b99db-3c0c-444d-ad0c-60e58359f6ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.481364 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.481395 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.534537 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.671240 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.671318 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.737991 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.972874 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skpv7" event={"ID":"777b99db-3c0c-444d-ad0c-60e58359f6ad","Type":"ContainerDied","Data":"43868c3ae11d62e09f8febfd2c4b48ae9cc6ca62a3a92e9aeb8ef88016590d62"} Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.972962 4735 scope.go:117] "RemoveContainer" containerID="da2c3df07dbcca5b4bd1a8cb150c190ba4dc773d44f6218d54b23277c241b6de" Jan 28 09:46:09 crc kubenswrapper[4735]: I0128 09:46:09.973137 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-skpv7" Jan 28 09:46:10 crc kubenswrapper[4735]: I0128 09:46:10.006152 4735 scope.go:117] "RemoveContainer" containerID="97e5a84f20caf366fbc3e385f5943b91ed9c29148c85e6f49866730b80b586a4" Jan 28 09:46:10 crc kubenswrapper[4735]: I0128 09:46:10.012018 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-skpv7"] Jan 28 09:46:10 crc kubenswrapper[4735]: I0128 09:46:10.015474 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-skpv7"] Jan 28 09:46:10 crc kubenswrapper[4735]: I0128 09:46:10.029278 4735 scope.go:117] "RemoveContainer" containerID="107ce3b3582f6a1a8c3b1149844427f8c3b5b2258f129c3f5b2fb2a3ac5eb94b" Jan 28 09:46:10 crc kubenswrapper[4735]: I0128 09:46:10.030336 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:46:10 crc kubenswrapper[4735]: I0128 09:46:10.040919 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:46:11 crc kubenswrapper[4735]: I0128 09:46:11.511546 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" path="/var/lib/kubelet/pods/777b99db-3c0c-444d-ad0c-60e58359f6ad/volumes" Jan 28 09:46:13 crc kubenswrapper[4735]: I0128 09:46:13.538372 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xzcx4"] Jan 28 09:46:13 crc kubenswrapper[4735]: I0128 09:46:13.538680 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xzcx4" podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerName="registry-server" containerID="cri-o://638ee8b9dc8943481f5075031492a59b1ae532cb06a443af4c7283eb6aa11397" gracePeriod=2 Jan 28 09:46:13 crc kubenswrapper[4735]: I0128 
09:46:13.998400 4735 generic.go:334] "Generic (PLEG): container finished" podID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerID="638ee8b9dc8943481f5075031492a59b1ae532cb06a443af4c7283eb6aa11397" exitCode=0 Jan 28 09:46:13 crc kubenswrapper[4735]: I0128 09:46:13.998452 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzcx4" event={"ID":"a150b50e-b4b0-40a5-8947-aabedfc6fa74","Type":"ContainerDied","Data":"638ee8b9dc8943481f5075031492a59b1ae532cb06a443af4c7283eb6aa11397"} Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.052977 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.141622 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-utilities\") pod \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.141705 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gncvk\" (UniqueName: \"kubernetes.io/projected/a150b50e-b4b0-40a5-8947-aabedfc6fa74-kube-api-access-gncvk\") pod \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.141739 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-catalog-content\") pod \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\" (UID: \"a150b50e-b4b0-40a5-8947-aabedfc6fa74\") " Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.145592 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-utilities" (OuterVolumeSpecName: "utilities") pod "a150b50e-b4b0-40a5-8947-aabedfc6fa74" (UID: "a150b50e-b4b0-40a5-8947-aabedfc6fa74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.151380 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a150b50e-b4b0-40a5-8947-aabedfc6fa74-kube-api-access-gncvk" (OuterVolumeSpecName: "kube-api-access-gncvk") pod "a150b50e-b4b0-40a5-8947-aabedfc6fa74" (UID: "a150b50e-b4b0-40a5-8947-aabedfc6fa74"). InnerVolumeSpecName "kube-api-access-gncvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.244261 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.244314 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gncvk\" (UniqueName: \"kubernetes.io/projected/a150b50e-b4b0-40a5-8947-aabedfc6fa74-kube-api-access-gncvk\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.295153 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a150b50e-b4b0-40a5-8947-aabedfc6fa74" (UID: "a150b50e-b4b0-40a5-8947-aabedfc6fa74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:46:14 crc kubenswrapper[4735]: I0128 09:46:14.346075 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a150b50e-b4b0-40a5-8947-aabedfc6fa74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.006247 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzcx4" event={"ID":"a150b50e-b4b0-40a5-8947-aabedfc6fa74","Type":"ContainerDied","Data":"36795afd2f682d85c584560cf5322e95ac89f77b4d7dba0bafc6d004f9966ed2"} Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.006306 4735 scope.go:117] "RemoveContainer" containerID="638ee8b9dc8943481f5075031492a59b1ae532cb06a443af4c7283eb6aa11397" Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.006313 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xzcx4" Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.024320 4735 scope.go:117] "RemoveContainer" containerID="30b5f8481b5eee47ab39131dcc3e4ea1cc6a89e5dbd3bc7c18573a8fae968680" Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.040732 4735 scope.go:117] "RemoveContainer" containerID="ac589df831c31ff1a44b32403fc0631fd3d98acef218a1281c52e81609a55083" Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.042646 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xzcx4"] Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.045467 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xzcx4"] Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.515017 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" path="/var/lib/kubelet/pods/a150b50e-b4b0-40a5-8947-aabedfc6fa74/volumes" Jan 28 09:46:15 crc 
kubenswrapper[4735]: I0128 09:46:15.579167 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.740089 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.740182 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:46:15 crc kubenswrapper[4735]: I0128 09:46:15.790666 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:46:16 crc kubenswrapper[4735]: I0128 09:46:16.059124 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:46:17 crc kubenswrapper[4735]: I0128 09:46:17.840927 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fhwqk"] Jan 28 09:46:17 crc kubenswrapper[4735]: I0128 09:46:17.938971 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrpjd"] Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.024459 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rrpjd" podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerName="registry-server" containerID="cri-o://167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361" gracePeriod=2 Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.405765 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.504615 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-utilities\") pod \"5deece60-d2e0-46c9-9c66-d70a01722ba6\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.504683 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwmms\" (UniqueName: \"kubernetes.io/projected/5deece60-d2e0-46c9-9c66-d70a01722ba6-kube-api-access-bwmms\") pod \"5deece60-d2e0-46c9-9c66-d70a01722ba6\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.504790 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-catalog-content\") pod \"5deece60-d2e0-46c9-9c66-d70a01722ba6\" (UID: \"5deece60-d2e0-46c9-9c66-d70a01722ba6\") " Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.505690 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-utilities" (OuterVolumeSpecName: "utilities") pod "5deece60-d2e0-46c9-9c66-d70a01722ba6" (UID: "5deece60-d2e0-46c9-9c66-d70a01722ba6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.509984 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5deece60-d2e0-46c9-9c66-d70a01722ba6-kube-api-access-bwmms" (OuterVolumeSpecName: "kube-api-access-bwmms") pod "5deece60-d2e0-46c9-9c66-d70a01722ba6" (UID: "5deece60-d2e0-46c9-9c66-d70a01722ba6"). InnerVolumeSpecName "kube-api-access-bwmms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.562224 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5deece60-d2e0-46c9-9c66-d70a01722ba6" (UID: "5deece60-d2e0-46c9-9c66-d70a01722ba6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.606922 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.606969 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5deece60-d2e0-46c9-9c66-d70a01722ba6-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:18 crc kubenswrapper[4735]: I0128 09:46:18.606982 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwmms\" (UniqueName: \"kubernetes.io/projected/5deece60-d2e0-46c9-9c66-d70a01722ba6-kube-api-access-bwmms\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.034143 4735 generic.go:334] "Generic (PLEG): container finished" podID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerID="167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361" exitCode=0 Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.034190 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrpjd" event={"ID":"5deece60-d2e0-46c9-9c66-d70a01722ba6","Type":"ContainerDied","Data":"167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361"} Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.034224 4735 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-rrpjd" event={"ID":"5deece60-d2e0-46c9-9c66-d70a01722ba6","Type":"ContainerDied","Data":"4160d9190b1ffbfeadacb2f41a5441e1f6341ffcf38741cbd0cd58f7fd6d10bb"} Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.034245 4735 scope.go:117] "RemoveContainer" containerID="167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.034247 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrpjd" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.060162 4735 scope.go:117] "RemoveContainer" containerID="06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.094984 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrpjd"] Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.100900 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rrpjd"] Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.112416 4735 scope.go:117] "RemoveContainer" containerID="b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.135083 4735 scope.go:117] "RemoveContainer" containerID="167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361" Jan 28 09:46:19 crc kubenswrapper[4735]: E0128 09:46:19.136610 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361\": container with ID starting with 167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361 not found: ID does not exist" containerID="167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 
09:46:19.136668 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361"} err="failed to get container status \"167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361\": rpc error: code = NotFound desc = could not find container \"167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361\": container with ID starting with 167013ac36df76994a265daa4925e1cf67001a515bbf57ab8b4d87a714325361 not found: ID does not exist" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.136701 4735 scope.go:117] "RemoveContainer" containerID="06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad" Jan 28 09:46:19 crc kubenswrapper[4735]: E0128 09:46:19.137484 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad\": container with ID starting with 06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad not found: ID does not exist" containerID="06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.137516 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad"} err="failed to get container status \"06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad\": rpc error: code = NotFound desc = could not find container \"06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad\": container with ID starting with 06f9ecb051d65091ce154a15981b909c66ec483bc927680d01107656bcca8dad not found: ID does not exist" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.137542 4735 scope.go:117] "RemoveContainer" containerID="b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf" Jan 28 09:46:19 crc 
kubenswrapper[4735]: E0128 09:46:19.137956 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf\": container with ID starting with b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf not found: ID does not exist" containerID="b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.137981 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf"} err="failed to get container status \"b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf\": rpc error: code = NotFound desc = could not find container \"b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf\": container with ID starting with b86bd64315e20cc65c2a81663b5862330fb1add9a7ee5bd24f152f05201e49bf not found: ID does not exist" Jan 28 09:46:19 crc kubenswrapper[4735]: I0128 09:46:19.516800 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" path="/var/lib/kubelet/pods/5deece60-d2e0-46c9-9c66-d70a01722ba6/volumes" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.691186 4735 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.691848 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.691870 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.691889 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.691901 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.691923 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerName="extract-utilities" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.691935 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerName="extract-utilities" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.691947 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerName="extract-content" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.691959 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerName="extract-content" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.691984 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.691999 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.692019 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerName="extract-utilities" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692032 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerName="extract-utilities" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.692046 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692058 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.692076 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerName="extract-content" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692088 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerName="extract-content" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.692106 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerName="extract-content" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692117 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerName="extract-content" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.692135 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerName="extract-content" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692147 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerName="extract-content" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.692165 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerName="extract-utilities" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692176 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerName="extract-utilities" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.692196 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerName="extract-utilities" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692208 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerName="extract-utilities" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692355 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="5deece60-d2e0-46c9-9c66-d70a01722ba6" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692376 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa60863-d388-433c-aa18-a2e7c4078c9a" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692395 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a150b50e-b4b0-40a5-8947-aabedfc6fa74" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692418 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="777b99db-3c0c-444d-ad0c-60e58359f6ad" containerName="registry-server" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.692953 4735 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.693150 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.693379 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173" gracePeriod=15 Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.694096 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9" gracePeriod=15 Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.694188 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0" gracePeriod=15 Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.694250 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5" gracePeriod=15 Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.694272 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f" gracePeriod=15 Jan 28 09:46:23 crc 
kubenswrapper[4735]: I0128 09:46:23.697177 4735 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.697715 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.697742 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.697793 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.697809 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.697835 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.697853 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.697874 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.697889 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.697912 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.697928 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.697953 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.697972 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.698183 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.698207 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.698229 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.698264 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.698307 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.698330 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 09:46:23 crc kubenswrapper[4735]: E0128 09:46:23.698557 4735 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.698592 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.736423 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.771934 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.772057 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.772093 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.772123 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.772155 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.772181 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.772234 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.772260 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.873942 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874009 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874035 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874067 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874091 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874106 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874126 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874143 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874205 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874241 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874265 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874285 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874326 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874343 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874365 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:23 crc kubenswrapper[4735]: I0128 09:46:23.874386 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:24 crc kubenswrapper[4735]: I0128 09:46:24.030844 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:46:24 crc kubenswrapper[4735]: W0128 09:46:24.063518 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-63170146ee62f0afc6083c9427d8831426fb1b5d5f75a42c57ab7ac21314469b WatchSource:0}: Error finding container 63170146ee62f0afc6083c9427d8831426fb1b5d5f75a42c57ab7ac21314469b: Status 404 returned error can't find the container with id 63170146ee62f0afc6083c9427d8831426fb1b5d5f75a42c57ab7ac21314469b Jan 28 09:46:24 crc kubenswrapper[4735]: E0128 09:46:24.068936 4735 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188edbfd2f2b9337 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 09:46:24.067777335 +0000 UTC m=+237.217441758,LastTimestamp:2026-01-28 09:46:24.067777335 +0000 UTC m=+237.217441758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 09:46:24 crc kubenswrapper[4735]: I0128 09:46:24.072577 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 09:46:24 crc kubenswrapper[4735]: I0128 09:46:24.078938 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 09:46:24 crc kubenswrapper[4735]: I0128 09:46:24.080509 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0" exitCode=0 Jan 28 09:46:24 crc kubenswrapper[4735]: I0128 09:46:24.080538 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5" exitCode=2 Jan 28 09:46:24 crc kubenswrapper[4735]: I0128 09:46:24.229236 4735 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 28 09:46:24 crc kubenswrapper[4735]: I0128 09:46:24.229585 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 28 09:46:24 crc kubenswrapper[4735]: E0128 09:46:24.601462 4735 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188edbfd2f2b9337 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 09:46:24.067777335 +0000 UTC m=+237.217441758,LastTimestamp:2026-01-28 09:46:24.067777335 +0000 UTC m=+237.217441758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.094517 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"63170146ee62f0afc6083c9427d8831426fb1b5d5f75a42c57ab7ac21314469b"} Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.097253 4735 generic.go:334] "Generic (PLEG): container finished" podID="36a77118-849f-4578-96fb-970de9c7128f" containerID="cdbd6e2e0f8ee46da466e931deb320ab86d0db5c23250ef1b032ed1cde671784" exitCode=0 Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.097370 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"36a77118-849f-4578-96fb-970de9c7128f","Type":"ContainerDied","Data":"cdbd6e2e0f8ee46da466e931deb320ab86d0db5c23250ef1b032ed1cde671784"} Jan 28 09:46:25 crc 
kubenswrapper[4735]: I0128 09:46:25.098253 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.099082 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.099658 4735 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.100836 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.102641 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.103657 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9" exitCode=0 Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.103697 4735 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f" exitCode=0 Jan 28 09:46:25 crc kubenswrapper[4735]: I0128 09:46:25.103770 4735 scope.go:117] "RemoveContainer" containerID="c813b5ca88a01710711c9c39f1fd96e8740951e432d3fe234a3cc1a9b5cb8eb7" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.111971 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.114656 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec"} Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.121286 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.121580 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.450371 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.451346 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.451815 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.512469 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36a77118-849f-4578-96fb-970de9c7128f-kube-api-access\") pod \"36a77118-849f-4578-96fb-970de9c7128f\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.512607 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-var-lock\") pod \"36a77118-849f-4578-96fb-970de9c7128f\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.512646 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-var-lock" (OuterVolumeSpecName: "var-lock") pod "36a77118-849f-4578-96fb-970de9c7128f" (UID: "36a77118-849f-4578-96fb-970de9c7128f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.512730 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-kubelet-dir\") pod \"36a77118-849f-4578-96fb-970de9c7128f\" (UID: \"36a77118-849f-4578-96fb-970de9c7128f\") " Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.512779 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "36a77118-849f-4578-96fb-970de9c7128f" (UID: "36a77118-849f-4578-96fb-970de9c7128f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.513048 4735 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.513065 4735 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/36a77118-849f-4578-96fb-970de9c7128f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.519025 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a77118-849f-4578-96fb-970de9c7128f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "36a77118-849f-4578-96fb-970de9c7128f" (UID: "36a77118-849f-4578-96fb-970de9c7128f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.616144 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36a77118-849f-4578-96fb-970de9c7128f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.881171 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.882255 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.882900 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.883156 4735 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.883436 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 
09:46:26.918898 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.919188 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.919381 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.919018 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.919504 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.919605 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.920000 4735 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.920134 4735 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.920248 4735 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:26 crc kubenswrapper[4735]: E0128 09:46:26.934256 4735 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: E0128 09:46:26.934532 4735 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: E0128 09:46:26.934838 4735 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: E0128 09:46:26.935253 4735 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: E0128 09:46:26.935530 4735 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:26 crc kubenswrapper[4735]: I0128 09:46:26.935564 4735 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 09:46:26 crc kubenswrapper[4735]: E0128 09:46:26.935844 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="200ms" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.124037 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.124738 4735 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173" exitCode=0 Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.124842 4735 scope.go:117] "RemoveContainer" containerID="6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9" Jan 28 09:46:27 crc 
kubenswrapper[4735]: I0128 09:46:27.124931 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.128179 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"36a77118-849f-4578-96fb-970de9c7128f","Type":"ContainerDied","Data":"9bef68f88e9a3111db833c5d49568efbd21962d70a024b69786eed06e804e6d7"} Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.128212 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.128222 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bef68f88e9a3111db833c5d49568efbd21962d70a024b69786eed06e804e6d7" Jan 28 09:46:27 crc kubenswrapper[4735]: E0128 09:46:27.136871 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="400ms" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.145679 4735 scope.go:117] "RemoveContainer" containerID="85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.149291 4735 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.149547 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.149768 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.150153 4735 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.150461 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.151027 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.161482 4735 scope.go:117] "RemoveContainer" 
containerID="b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.179085 4735 scope.go:117] "RemoveContainer" containerID="30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.193903 4735 scope.go:117] "RemoveContainer" containerID="4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.218564 4735 scope.go:117] "RemoveContainer" containerID="aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.242128 4735 scope.go:117] "RemoveContainer" containerID="6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9" Jan 28 09:46:27 crc kubenswrapper[4735]: E0128 09:46:27.242607 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\": container with ID starting with 6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9 not found: ID does not exist" containerID="6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.242654 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9"} err="failed to get container status \"6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\": rpc error: code = NotFound desc = could not find container \"6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9\": container with ID starting with 6a808ad39f5dc5e51a169268d15b4bc9c2bcddd0a241753b7dc83b26d98eb1a9 not found: ID does not exist" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.242687 4735 scope.go:117] "RemoveContainer" 
containerID="85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0" Jan 28 09:46:27 crc kubenswrapper[4735]: E0128 09:46:27.243235 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\": container with ID starting with 85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0 not found: ID does not exist" containerID="85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.243257 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0"} err="failed to get container status \"85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\": rpc error: code = NotFound desc = could not find container \"85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0\": container with ID starting with 85b9f362c89e268cc574c8ef11c8d57f22571eec8c8cb4e3f8f99a61c69527e0 not found: ID does not exist" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.243272 4735 scope.go:117] "RemoveContainer" containerID="b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f" Jan 28 09:46:27 crc kubenswrapper[4735]: E0128 09:46:27.243487 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\": container with ID starting with b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f not found: ID does not exist" containerID="b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.243507 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f"} err="failed to get container status \"b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\": rpc error: code = NotFound desc = could not find container \"b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f\": container with ID starting with b2624b013266cfbfa3ac088c6ad59871e58f57aa4dc9a11af5bff46dbf71862f not found: ID does not exist" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.243520 4735 scope.go:117] "RemoveContainer" containerID="30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5" Jan 28 09:46:27 crc kubenswrapper[4735]: E0128 09:46:27.243703 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\": container with ID starting with 30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5 not found: ID does not exist" containerID="30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.243717 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5"} err="failed to get container status \"30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\": rpc error: code = NotFound desc = could not find container \"30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5\": container with ID starting with 30f42bd9a122fb6144f52f5ebebd519939ca9ed9fdab398195fa0cdc8b4db2f5 not found: ID does not exist" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.243727 4735 scope.go:117] "RemoveContainer" containerID="4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173" Jan 28 09:46:27 crc kubenswrapper[4735]: E0128 09:46:27.244436 4735 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\": container with ID starting with 4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173 not found: ID does not exist" containerID="4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.244455 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173"} err="failed to get container status \"4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\": rpc error: code = NotFound desc = could not find container \"4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173\": container with ID starting with 4e17337a4e0aeba2eed22d8da3024f59be10d19cbef80390b52b31a155447173 not found: ID does not exist" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.244467 4735 scope.go:117] "RemoveContainer" containerID="aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b" Jan 28 09:46:27 crc kubenswrapper[4735]: E0128 09:46:27.244831 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\": container with ID starting with aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b not found: ID does not exist" containerID="aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.244851 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b"} err="failed to get container status \"aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\": rpc error: code = NotFound desc = could not find container 
\"aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b\": container with ID starting with aa6af84662068545a697141b6075996e68b1aee384bf01560f2131bdaae5e58b not found: ID does not exist" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.509820 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.512871 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.527597 4735 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:27 crc kubenswrapper[4735]: I0128 09:46:27.530228 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 09:46:27 crc kubenswrapper[4735]: E0128 09:46:27.538373 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="800ms" Jan 28 09:46:28 crc kubenswrapper[4735]: 
E0128 09:46:28.339483 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="1.6s" Jan 28 09:46:29 crc kubenswrapper[4735]: E0128 09:46:29.940315 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="3.2s" Jan 28 09:46:33 crc kubenswrapper[4735]: E0128 09:46:33.141726 4735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.236:6443: connect: connection refused" interval="6.4s" Jan 28 09:46:34 crc kubenswrapper[4735]: E0128 09:46:34.602943 4735 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.236:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188edbfd2f2b9337 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 09:46:24.067777335 +0000 UTC m=+237.217441758,LastTimestamp:2026-01-28 09:46:24.067777335 +0000 UTC 
m=+237.217441758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 09:46:36 crc kubenswrapper[4735]: I0128 09:46:36.502207 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:36 crc kubenswrapper[4735]: I0128 09:46:36.503867 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:36 crc kubenswrapper[4735]: I0128 09:46:36.504405 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:36 crc kubenswrapper[4735]: I0128 09:46:36.527175 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:36 crc kubenswrapper[4735]: I0128 09:46:36.527203 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:36 crc kubenswrapper[4735]: E0128 09:46:36.527728 4735 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:36 crc kubenswrapper[4735]: I0128 09:46:36.528330 4735 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.192201 4735 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8bc4bd3c9f9efb4b5f67f419596c7f8ef05b1c544ca7fcb231b022751f89b0c1" exitCode=0 Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.192375 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8bc4bd3c9f9efb4b5f67f419596c7f8ef05b1c544ca7fcb231b022751f89b0c1"} Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.193119 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"33110b122b56c92cad3456e3ab4ee3d74608ecfc298dad3fab0a964b7990f842"} Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.193673 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.193897 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.194296 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:37 crc kubenswrapper[4735]: E0128 09:46:37.194731 4735 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.196296 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.507793 4735 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.508000 4735 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:37 crc kubenswrapper[4735]: I0128 09:46:37.508413 4735 status_manager.go:851] "Failed to get status for pod" podUID="36a77118-849f-4578-96fb-970de9c7128f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.236:6443: connect: connection refused" Jan 28 09:46:38 crc kubenswrapper[4735]: I0128 09:46:38.210598 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 09:46:38 crc kubenswrapper[4735]: I0128 09:46:38.210968 4735 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118" exitCode=1 Jan 28 09:46:38 crc kubenswrapper[4735]: I0128 09:46:38.211063 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118"} Jan 28 09:46:38 crc kubenswrapper[4735]: I0128 09:46:38.211639 4735 scope.go:117] "RemoveContainer" containerID="b1020fbfa0d426254481c7eaa62345f2501207f5c4bf53c54217c5b638bf7118" Jan 28 09:46:38 crc kubenswrapper[4735]: I0128 09:46:38.220853 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"05bdf4de2b99bd0b0e04e9d843ed571d323674061af714987eeb365143c7f7e6"} Jan 28 09:46:38 crc kubenswrapper[4735]: I0128 09:46:38.220892 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"efc712acc83318c63747d89ec23a2dd956dcf226cb00ebddcc7f297f5df36f2b"} Jan 28 09:46:38 crc kubenswrapper[4735]: I0128 09:46:38.220902 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1ba273ec207410232b31b57637ff7a7bede29ac44e6ffb08fb85428cc88686e7"} Jan 28 09:46:39 crc kubenswrapper[4735]: I0128 09:46:39.228095 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9e3e4d872e330caff213f3cdb68833f4cc908841c469d4ad4b888c4bf37523aa"} Jan 28 09:46:39 crc kubenswrapper[4735]: I0128 09:46:39.228139 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0fafdef00eca0dce4563c0059cc82ad1285e09c2b4f9de137b21f8794543f0d9"} Jan 28 09:46:39 crc kubenswrapper[4735]: I0128 09:46:39.228269 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:39 crc kubenswrapper[4735]: I0128 09:46:39.228396 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:39 crc kubenswrapper[4735]: I0128 09:46:39.228415 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:39 crc kubenswrapper[4735]: I0128 09:46:39.230395 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 09:46:39 crc kubenswrapper[4735]: I0128 09:46:39.230439 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1d3b55df4dc487895b9b05b9e23f1734c4eec7a4498c88a985b1236fc7019344"} Jan 28 09:46:41 crc kubenswrapper[4735]: I0128 09:46:41.529025 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:41 crc kubenswrapper[4735]: I0128 09:46:41.529329 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:41 crc kubenswrapper[4735]: I0128 09:46:41.534740 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:42 crc kubenswrapper[4735]: I0128 09:46:42.579941 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:46:42 crc kubenswrapper[4735]: I0128 09:46:42.872778 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" podUID="3c8d118a-3940-4efc-a7f3-16cb8de02990" containerName="oauth-openshift" containerID="cri-o://393a8412db035ac085bd7cc5440518a3da52729c0582895be7a332f6b4147270" gracePeriod=15 Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.254756 4735 generic.go:334] "Generic (PLEG): container finished" podID="3c8d118a-3940-4efc-a7f3-16cb8de02990" containerID="393a8412db035ac085bd7cc5440518a3da52729c0582895be7a332f6b4147270" exitCode=0 Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.254864 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" event={"ID":"3c8d118a-3940-4efc-a7f3-16cb8de02990","Type":"ContainerDied","Data":"393a8412db035ac085bd7cc5440518a3da52729c0582895be7a332f6b4147270"} Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.255237 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" event={"ID":"3c8d118a-3940-4efc-a7f3-16cb8de02990","Type":"ContainerDied","Data":"861d7d49f1f7d751020d4ea960ec62cde2b0a5b170232dc6734b3f53f95362b5"} Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.255261 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="861d7d49f1f7d751020d4ea960ec62cde2b0a5b170232dc6734b3f53f95362b5" Jan 28 09:46:43 crc 
kubenswrapper[4735]: I0128 09:46:43.279825 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344304 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-router-certs\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344395 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-ocp-branding-template\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344443 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-provider-selection\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344487 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-dir\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344521 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-serving-cert\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344565 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-trusted-ca-bundle\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344591 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344620 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-error\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344656 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-session\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344697 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-login\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344732 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmhlj\" (UniqueName: \"kubernetes.io/projected/3c8d118a-3940-4efc-a7f3-16cb8de02990-kube-api-access-xmhlj\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344795 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-idp-0-file-data\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344851 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-service-ca\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344884 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-cliconfig\") pod \"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.344920 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-policies\") pod 
\"3c8d118a-3940-4efc-a7f3-16cb8de02990\" (UID: \"3c8d118a-3940-4efc-a7f3-16cb8de02990\") " Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.345205 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.345221 4735 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.345297 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.345614 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.345823 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.351301 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.351383 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8d118a-3940-4efc-a7f3-16cb8de02990-kube-api-access-xmhlj" (OuterVolumeSpecName: "kube-api-access-xmhlj") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "kube-api-access-xmhlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.351643 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.352093 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.356913 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.357045 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.365113 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.370191 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.370468 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3c8d118a-3940-4efc-a7f3-16cb8de02990" (UID: "3c8d118a-3940-4efc-a7f3-16cb8de02990"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446691 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446731 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446741 4735 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446770 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446787 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446799 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446809 4735 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446817 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446825 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446861 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446872 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446881 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmhlj\" (UniqueName: \"kubernetes.io/projected/3c8d118a-3940-4efc-a7f3-16cb8de02990-kube-api-access-xmhlj\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:43 crc kubenswrapper[4735]: I0128 09:46:43.446891 4735 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3c8d118a-3940-4efc-a7f3-16cb8de02990-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 09:46:44 crc kubenswrapper[4735]: 
I0128 09:46:44.239168 4735 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:44 crc kubenswrapper[4735]: I0128 09:46:44.260507 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fhwqk" Jan 28 09:46:44 crc kubenswrapper[4735]: I0128 09:46:44.776383 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:46:44 crc kubenswrapper[4735]: I0128 09:46:44.781354 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:46:45 crc kubenswrapper[4735]: E0128 09:46:45.036120 4735 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 28 09:46:45 crc kubenswrapper[4735]: I0128 09:46:45.266123 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:45 crc kubenswrapper[4735]: I0128 09:46:45.266157 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:45 crc kubenswrapper[4735]: I0128 09:46:45.272254 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:46:45 crc kubenswrapper[4735]: I0128 09:46:45.275921 4735 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="db28dc75-da2b-49b6-ad52-6a76768b455d" Jan 28 09:46:46 crc kubenswrapper[4735]: I0128 09:46:46.274128 4735 kubelet.go:1909] 
"Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:46 crc kubenswrapper[4735]: I0128 09:46:46.274517 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:46:47 crc kubenswrapper[4735]: I0128 09:46:47.519094 4735 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="db28dc75-da2b-49b6-ad52-6a76768b455d" Jan 28 09:46:52 crc kubenswrapper[4735]: I0128 09:46:52.586432 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 09:46:54 crc kubenswrapper[4735]: I0128 09:46:54.062134 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 09:46:54 crc kubenswrapper[4735]: I0128 09:46:54.223665 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 09:46:54 crc kubenswrapper[4735]: I0128 09:46:54.508723 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 09:46:54 crc kubenswrapper[4735]: I0128 09:46:54.670536 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 09:46:54 crc kubenswrapper[4735]: I0128 09:46:54.792240 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 09:46:55 crc kubenswrapper[4735]: I0128 09:46:55.013042 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 09:46:55 crc kubenswrapper[4735]: 
I0128 09:46:55.438474 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 09:46:55 crc kubenswrapper[4735]: I0128 09:46:55.603388 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 09:46:55 crc kubenswrapper[4735]: I0128 09:46:55.985070 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 09:46:56 crc kubenswrapper[4735]: I0128 09:46:56.148642 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 09:46:56 crc kubenswrapper[4735]: I0128 09:46:56.311248 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 09:46:56 crc kubenswrapper[4735]: I0128 09:46:56.330301 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 09:46:56 crc kubenswrapper[4735]: I0128 09:46:56.582107 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 09:46:56 crc kubenswrapper[4735]: I0128 09:46:56.726651 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 09:46:56 crc kubenswrapper[4735]: I0128 09:46:56.917658 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 09:46:56 crc kubenswrapper[4735]: I0128 09:46:56.973321 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 09:46:56 crc kubenswrapper[4735]: I0128 09:46:56.984134 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 09:46:57 crc kubenswrapper[4735]: I0128 09:46:57.130360 4735 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 09:46:57 crc kubenswrapper[4735]: I0128 09:46:57.440251 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 09:46:57 crc kubenswrapper[4735]: I0128 09:46:57.470007 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 09:46:57 crc kubenswrapper[4735]: I0128 09:46:57.536959 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 09:46:57 crc kubenswrapper[4735]: I0128 09:46:57.580429 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 09:46:57 crc kubenswrapper[4735]: I0128 09:46:57.609821 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 09:46:57 crc kubenswrapper[4735]: I0128 09:46:57.652802 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 09:46:57 crc kubenswrapper[4735]: I0128 09:46:57.704723 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 09:46:57 crc kubenswrapper[4735]: I0128 09:46:57.989717 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 09:46:58 crc kubenswrapper[4735]: I0128 09:46:58.036096 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 09:46:58 crc kubenswrapper[4735]: I0128 09:46:58.073114 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 09:46:58 crc kubenswrapper[4735]: I0128 
09:46:58.110392 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 09:46:58 crc kubenswrapper[4735]: I0128 09:46:58.134686 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 09:46:58 crc kubenswrapper[4735]: I0128 09:46:58.400207 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 09:46:58 crc kubenswrapper[4735]: I0128 09:46:58.513122 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 09:46:58 crc kubenswrapper[4735]: I0128 09:46:58.633570 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 09:46:58 crc kubenswrapper[4735]: I0128 09:46:58.802320 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.043539 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.094289 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.124002 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.141765 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.156116 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.167432 4735 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.169408 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.178736 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.209210 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.377023 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.377091 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.384944 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.402232 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.510437 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.528142 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.534362 4735 
reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.562128 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.615204 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.800451 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.849409 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.915457 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.949644 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 09:46:59 crc kubenswrapper[4735]: I0128 09:46:59.991159 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.104048 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.130260 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.222358 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 09:47:00 crc 
kubenswrapper[4735]: I0128 09:47:00.234652 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.245828 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.308677 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.445153 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.599619 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.621042 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.786452 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.877865 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.896273 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.907663 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.949704 4735 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 09:47:00 crc kubenswrapper[4735]: I0128 09:47:00.986627 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.078637 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.150316 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.197679 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.209180 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.303987 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.419269 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.479685 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.541443 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.577619 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 09:47:01 crc kubenswrapper[4735]: 
I0128 09:47:01.683274 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.690034 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.727929 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.852005 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 09:47:01 crc kubenswrapper[4735]: I0128 09:47:01.984978 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.007725 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.012788 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.057691 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.152148 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.196908 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.279076 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 
09:47:02.284601 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.324522 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.350611 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.397191 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.504199 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.609858 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.646125 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.765418 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.786706 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.810149 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.842180 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 09:47:02 crc 
kubenswrapper[4735]: I0128 09:47:02.883098 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.925078 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.949700 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.968520 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 09:47:02 crc kubenswrapper[4735]: I0128 09:47:02.975996 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.149818 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.168527 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.211659 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.377250 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.382790 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.385241 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.424395 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.459624 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.460041 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.476036 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.476838 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.558340 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.595098 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.596088 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.601238 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.624290 4735 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.631689 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.644950 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.800352 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.859243 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 09:47:03 crc kubenswrapper[4735]: I0128 09:47:03.889518 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.067222 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.117710 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.145408 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.150235 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.215051 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.221406 4735 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.362146 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.369001 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.596319 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.645234 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.649135 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.652146 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.666629 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.705417 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.706192 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.715650 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.721899 4735 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.789968 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.832903 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.897343 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.902121 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.912456 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.926510 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.960551 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 09:47:04 crc kubenswrapper[4735]: I0128 09:47:04.977415 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.034206 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.162482 4735 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 09:47:05 crc 
kubenswrapper[4735]: I0128 09:47:05.164744 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.164727306 podStartE2EDuration="42.164727306s" podCreationTimestamp="2026-01-28 09:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:46:43.989494701 +0000 UTC m=+257.139159094" watchObservedRunningTime="2026-01-28 09:47:05.164727306 +0000 UTC m=+278.314391689" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.167289 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.167913 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-fhwqk"] Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.167977 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-758c4c8f95-rckd2"] Jan 28 09:47:05 crc kubenswrapper[4735]: E0128 09:47:05.168219 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a77118-849f-4578-96fb-970de9c7128f" containerName="installer" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.168244 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a77118-849f-4578-96fb-970de9c7128f" containerName="installer" Jan 28 09:47:05 crc kubenswrapper[4735]: E0128 09:47:05.168272 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8d118a-3940-4efc-a7f3-16cb8de02990" containerName="oauth-openshift" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.168348 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8d118a-3940-4efc-a7f3-16cb8de02990" containerName="oauth-openshift" 
Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.168450 4735 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.168475 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9420f813-d753-4228-9e17-c1123fa7a37b" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.168519 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a77118-849f-4578-96fb-970de9c7128f" containerName="installer" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.168543 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8d118a-3940-4efc-a7f3-16cb8de02990" containerName="oauth-openshift" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.169080 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.177107 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.177388 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.178930 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.179217 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.179368 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 
09:47:05.179501 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.180547 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.181164 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.181220 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.181552 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.184284 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.184357 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.188537 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.189472 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.193476 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.198488 4735 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.198427 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.198399355 podStartE2EDuration="21.198399355s" podCreationTimestamp="2026-01-28 09:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:47:05.198255841 +0000 UTC m=+278.347920224" watchObservedRunningTime="2026-01-28 09:47:05.198399355 +0000 UTC m=+278.348063768" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.252411 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.262452 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.308931 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.328621 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346383 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-template-error\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346438 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-audit-dir\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346475 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346504 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-template-login\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346527 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346544 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346570 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppf9g\" (UniqueName: \"kubernetes.io/projected/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-kube-api-access-ppf9g\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346596 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346628 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-session\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346653 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: 
\"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346732 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-audit-policies\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346812 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346860 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.346882 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.404236 
4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447699 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447794 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-session\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447816 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447846 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-audit-policies\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447873 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447902 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447922 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447941 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-template-error\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447961 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-audit-dir\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " 
pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.447983 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.448007 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-template-login\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.448033 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.448060 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.448091 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ppf9g\" (UniqueName: \"kubernetes.io/projected/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-kube-api-access-ppf9g\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.448913 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-audit-policies\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.449073 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.449278 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.449314 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " 
pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.449458 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-audit-dir\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.455186 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-template-login\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.455202 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.455774 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.455987 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.457940 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-session\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.457991 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-template-error\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.455290 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.459299 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 
28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.464710 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppf9g\" (UniqueName: \"kubernetes.io/projected/5315ba88-9abe-48cc-a1e5-b0177d0f98f3-kube-api-access-ppf9g\") pod \"oauth-openshift-758c4c8f95-rckd2\" (UID: \"5315ba88-9abe-48cc-a1e5-b0177d0f98f3\") " pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.472897 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.498574 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.509004 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8d118a-3940-4efc-a7f3-16cb8de02990" path="/var/lib/kubelet/pods/3c8d118a-3940-4efc-a7f3-16cb8de02990/volumes" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.658421 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.727369 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.862158 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.981116 4735 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 09:47:05 crc kubenswrapper[4735]: I0128 09:47:05.990378 4735 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.045404 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.194323 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.227237 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.306650 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.383081 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.420233 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.526423 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.655502 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.657594 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.694437 4735 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.694692 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec" gracePeriod=5 Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.728077 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.961534 4735 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 09:47:06 crc kubenswrapper[4735]: I0128 09:47:06.984315 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.017340 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.197856 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.203900 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.209541 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.293509 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 
09:47:07.374732 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.402196 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.424129 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.631129 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.659678 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.770568 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.858005 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 09:47:07 crc kubenswrapper[4735]: I0128 09:47:07.927269 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.025459 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.127983 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.226292 4735 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.254385 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.382634 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.405093 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.423833 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.479638 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.542075 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.589031 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.711956 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.759388 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.779320 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 09:47:08 crc 
kubenswrapper[4735]: I0128 09:47:08.840962 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.956711 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 09:47:08 crc kubenswrapper[4735]: I0128 09:47:08.986867 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.003436 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.013537 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.102543 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.111650 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.165142 4735 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.166337 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.177116 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.321863 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 
09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.360004 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.436948 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.463385 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.616572 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 09:47:09 crc kubenswrapper[4735]: I0128 09:47:09.948629 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 09:47:10 crc kubenswrapper[4735]: I0128 09:47:10.089771 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 09:47:10 crc kubenswrapper[4735]: I0128 09:47:10.125474 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 09:47:10 crc kubenswrapper[4735]: I0128 09:47:10.335717 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 09:47:10 crc kubenswrapper[4735]: I0128 09:47:10.391561 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-758c4c8f95-rckd2"] Jan 28 09:47:10 crc kubenswrapper[4735]: I0128 09:47:10.527610 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-758c4c8f95-rckd2"] Jan 28 09:47:10 crc kubenswrapper[4735]: I0128 09:47:10.599259 4735 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 09:47:10 crc kubenswrapper[4735]: I0128 09:47:10.783886 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 09:47:10 crc kubenswrapper[4735]: I0128 09:47:10.805694 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.020465 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.086487 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.187319 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.189264 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.305231 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.438474 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" event={"ID":"5315ba88-9abe-48cc-a1e5-b0177d0f98f3","Type":"ContainerStarted","Data":"384740111fd0dc302141725ba3755a3d16aae430e1f70daea47046911e9c93e0"} Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.438529 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" event={"ID":"5315ba88-9abe-48cc-a1e5-b0177d0f98f3","Type":"ContainerStarted","Data":"ad5fda56d0f108b135246aa0d146c64ff4ba8a2036493be995ffb9f985fbb29f"} 
Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.438670 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.446015 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.459212 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-758c4c8f95-rckd2" podStartSLOduration=54.459193449 podStartE2EDuration="54.459193449s" podCreationTimestamp="2026-01-28 09:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:47:11.457581645 +0000 UTC m=+284.607246028" watchObservedRunningTime="2026-01-28 09:47:11.459193449 +0000 UTC m=+284.608857842" Jan 28 09:47:11 crc kubenswrapper[4735]: I0128 09:47:11.918522 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.007392 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.258002 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.258071 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.443829 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.443895 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.443955 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.443990 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.444017 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.444092 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.444105 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.444335 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.444438 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.445673 4735 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.445806 4735 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.445845 4735 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.445916 4735 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.446593 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.446637 4735 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec" exitCode=137 Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.446842 4735 scope.go:117] "RemoveContainer" containerID="e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.447097 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.453843 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.518682 4735 scope.go:117] "RemoveContainer" containerID="e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec" Jan 28 09:47:12 crc kubenswrapper[4735]: E0128 09:47:12.519155 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec\": container with ID starting with e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec not found: ID does not exist" containerID="e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.519203 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec"} err="failed to get container status \"e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec\": rpc error: code = NotFound desc = could not find container \"e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec\": container with ID starting with e8fc4948e00c645be24bbbc341c1d5516d322a5e738f5e881ab09894c80e52ec not found: ID does not exist" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.548566 4735 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") 
on node \"crc\" DevicePath \"\"" Jan 28 09:47:12 crc kubenswrapper[4735]: I0128 09:47:12.947041 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 09:47:13 crc kubenswrapper[4735]: I0128 09:47:13.024899 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 09:47:13 crc kubenswrapper[4735]: I0128 09:47:13.511487 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 09:47:13 crc kubenswrapper[4735]: I0128 09:47:13.511932 4735 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 28 09:47:13 crc kubenswrapper[4735]: I0128 09:47:13.521595 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 09:47:13 crc kubenswrapper[4735]: I0128 09:47:13.521649 4735 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f8f3ee8d-b83b-49ff-8e58-911d5534f5d6" Jan 28 09:47:13 crc kubenswrapper[4735]: I0128 09:47:13.525213 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 09:47:13 crc kubenswrapper[4735]: I0128 09:47:13.525240 4735 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f8f3ee8d-b83b-49ff-8e58-911d5534f5d6" Jan 28 09:47:27 crc kubenswrapper[4735]: I0128 09:47:27.304593 4735 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 09:47:27 crc kubenswrapper[4735]: I0128 09:47:27.526994 4735 generic.go:334] 
"Generic (PLEG): container finished" podID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerID="e76d9bcc7d840a7c255c309b1e86523644a67b721a85e90b556bfb8d8d1bd198" exitCode=0 Jan 28 09:47:27 crc kubenswrapper[4735]: I0128 09:47:27.527020 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" event={"ID":"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa","Type":"ContainerDied","Data":"e76d9bcc7d840a7c255c309b1e86523644a67b721a85e90b556bfb8d8d1bd198"} Jan 28 09:47:27 crc kubenswrapper[4735]: I0128 09:47:27.527582 4735 scope.go:117] "RemoveContainer" containerID="e76d9bcc7d840a7c255c309b1e86523644a67b721a85e90b556bfb8d8d1bd198" Jan 28 09:47:28 crc kubenswrapper[4735]: I0128 09:47:28.546004 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" event={"ID":"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa","Type":"ContainerStarted","Data":"8a9b6b5c1f95d843ad64b4b3ec2a9b5a0c609d02b15313343324a43d262f2257"} Jan 28 09:47:28 crc kubenswrapper[4735]: I0128 09:47:28.547316 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:47:28 crc kubenswrapper[4735]: I0128 09:47:28.553906 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:48:09 crc kubenswrapper[4735]: I0128 09:48:09.862920 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-knnrt"] Jan 28 09:48:09 crc kubenswrapper[4735]: I0128 09:48:09.863818 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" podUID="8dcd226c-c318-412c-9483-2f359c9a1b75" containerName="controller-manager" containerID="cri-o://7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5" gracePeriod=30 
Jan 28 09:48:09 crc kubenswrapper[4735]: I0128 09:48:09.951588 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"] Jan 28 09:48:09 crc kubenswrapper[4735]: I0128 09:48:09.951835 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" podUID="e308a994-46ac-4ad4-9b44-f4e0cc910c81" containerName="route-controller-manager" containerID="cri-o://4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69" gracePeriod=30 Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.338869 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.342432 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.365387 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-client-ca\") pod \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.365456 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcd226c-c318-412c-9483-2f359c9a1b75-serving-cert\") pod \"8dcd226c-c318-412c-9483-2f359c9a1b75\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.365513 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-proxy-ca-bundles\") pod \"8dcd226c-c318-412c-9483-2f359c9a1b75\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.365579 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-client-ca\") pod \"8dcd226c-c318-412c-9483-2f359c9a1b75\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.365605 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkwt5\" (UniqueName: \"kubernetes.io/projected/e308a994-46ac-4ad4-9b44-f4e0cc910c81-kube-api-access-dkwt5\") pod \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.365640 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw8qv\" (UniqueName: \"kubernetes.io/projected/8dcd226c-c318-412c-9483-2f359c9a1b75-kube-api-access-rw8qv\") pod \"8dcd226c-c318-412c-9483-2f359c9a1b75\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.365686 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-config\") pod \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.365729 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-config\") pod \"8dcd226c-c318-412c-9483-2f359c9a1b75\" (UID: \"8dcd226c-c318-412c-9483-2f359c9a1b75\") " Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.365769 
4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e308a994-46ac-4ad4-9b44-f4e0cc910c81-serving-cert\") pod \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\" (UID: \"e308a994-46ac-4ad4-9b44-f4e0cc910c81\") " Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.366328 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-client-ca" (OuterVolumeSpecName: "client-ca") pod "e308a994-46ac-4ad4-9b44-f4e0cc910c81" (UID: "e308a994-46ac-4ad4-9b44-f4e0cc910c81"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.366484 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-config" (OuterVolumeSpecName: "config") pod "e308a994-46ac-4ad4-9b44-f4e0cc910c81" (UID: "e308a994-46ac-4ad4-9b44-f4e0cc910c81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.366793 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8dcd226c-c318-412c-9483-2f359c9a1b75" (UID: "8dcd226c-c318-412c-9483-2f359c9a1b75"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.366833 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-client-ca" (OuterVolumeSpecName: "client-ca") pod "8dcd226c-c318-412c-9483-2f359c9a1b75" (UID: "8dcd226c-c318-412c-9483-2f359c9a1b75"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.366992 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-config" (OuterVolumeSpecName: "config") pod "8dcd226c-c318-412c-9483-2f359c9a1b75" (UID: "8dcd226c-c318-412c-9483-2f359c9a1b75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.378721 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcd226c-c318-412c-9483-2f359c9a1b75-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8dcd226c-c318-412c-9483-2f359c9a1b75" (UID: "8dcd226c-c318-412c-9483-2f359c9a1b75"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.379599 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e308a994-46ac-4ad4-9b44-f4e0cc910c81-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e308a994-46ac-4ad4-9b44-f4e0cc910c81" (UID: "e308a994-46ac-4ad4-9b44-f4e0cc910c81"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.379851 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcd226c-c318-412c-9483-2f359c9a1b75-kube-api-access-rw8qv" (OuterVolumeSpecName: "kube-api-access-rw8qv") pod "8dcd226c-c318-412c-9483-2f359c9a1b75" (UID: "8dcd226c-c318-412c-9483-2f359c9a1b75"). InnerVolumeSpecName "kube-api-access-rw8qv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.383830 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e308a994-46ac-4ad4-9b44-f4e0cc910c81-kube-api-access-dkwt5" (OuterVolumeSpecName: "kube-api-access-dkwt5") pod "e308a994-46ac-4ad4-9b44-f4e0cc910c81" (UID: "e308a994-46ac-4ad4-9b44-f4e0cc910c81"). InnerVolumeSpecName "kube-api-access-dkwt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.466881 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw8qv\" (UniqueName: \"kubernetes.io/projected/8dcd226c-c318-412c-9483-2f359c9a1b75-kube-api-access-rw8qv\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.466920 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.466931 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.466941 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e308a994-46ac-4ad4-9b44-f4e0cc910c81-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.466952 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e308a994-46ac-4ad4-9b44-f4e0cc910c81-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.466977 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8dcd226c-c318-412c-9483-2f359c9a1b75-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.466988 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.466996 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dcd226c-c318-412c-9483-2f359c9a1b75-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.467005 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkwt5\" (UniqueName: \"kubernetes.io/projected/e308a994-46ac-4ad4-9b44-f4e0cc910c81-kube-api-access-dkwt5\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.814623 4735 generic.go:334] "Generic (PLEG): container finished" podID="e308a994-46ac-4ad4-9b44-f4e0cc910c81" containerID="4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69" exitCode=0 Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.814729 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.814732 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" event={"ID":"e308a994-46ac-4ad4-9b44-f4e0cc910c81","Type":"ContainerDied","Data":"4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69"} Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.814970 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5" event={"ID":"e308a994-46ac-4ad4-9b44-f4e0cc910c81","Type":"ContainerDied","Data":"b944d89c49de71cc4998813c3bfa3d87cb5602f39b39b343fa7e1894adf9d01b"} Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.814991 4735 scope.go:117] "RemoveContainer" containerID="4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.816419 4735 generic.go:334] "Generic (PLEG): container finished" podID="8dcd226c-c318-412c-9483-2f359c9a1b75" containerID="7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5" exitCode=0 Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.816510 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" event={"ID":"8dcd226c-c318-412c-9483-2f359c9a1b75","Type":"ContainerDied","Data":"7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5"} Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.816589 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" event={"ID":"8dcd226c-c318-412c-9483-2f359c9a1b75","Type":"ContainerDied","Data":"6c11de2bdb1bb64baad26b7ec41fac4d8b19ef2a304d60208ae2ab60f71b4616"} Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.816691 4735 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-knnrt" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.834247 4735 scope.go:117] "RemoveContainer" containerID="4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69" Jan 28 09:48:10 crc kubenswrapper[4735]: E0128 09:48:10.834686 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69\": container with ID starting with 4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69 not found: ID does not exist" containerID="4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.834721 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69"} err="failed to get container status \"4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69\": rpc error: code = NotFound desc = could not find container \"4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69\": container with ID starting with 4bb93cc640376a3b9f3eabc7d6569f4088f61694220292c372ac3ffc05df6c69 not found: ID does not exist" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.834744 4735 scope.go:117] "RemoveContainer" containerID="7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.849910 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"] Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.851781 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dnhz5"] Jan 28 09:48:10 crc kubenswrapper[4735]: 
I0128 09:48:10.861013 4735 scope.go:117] "RemoveContainer" containerID="7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5" Jan 28 09:48:10 crc kubenswrapper[4735]: E0128 09:48:10.861934 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5\": container with ID starting with 7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5 not found: ID does not exist" containerID="7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.861969 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5"} err="failed to get container status \"7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5\": rpc error: code = NotFound desc = could not find container \"7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5\": container with ID starting with 7310c184b1ad189143156c26dac60948c24e63b143f56e24556b91fb681acba5 not found: ID does not exist" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.866140 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-knnrt"] Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.868736 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-knnrt"] Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.973521 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dd876dc64-858k5"] Jan 28 09:48:10 crc kubenswrapper[4735]: E0128 09:48:10.973729 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e308a994-46ac-4ad4-9b44-f4e0cc910c81" containerName="route-controller-manager" Jan 28 09:48:10 crc 
kubenswrapper[4735]: I0128 09:48:10.973741 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e308a994-46ac-4ad4-9b44-f4e0cc910c81" containerName="route-controller-manager" Jan 28 09:48:10 crc kubenswrapper[4735]: E0128 09:48:10.973765 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcd226c-c318-412c-9483-2f359c9a1b75" containerName="controller-manager" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.973771 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcd226c-c318-412c-9483-2f359c9a1b75" containerName="controller-manager" Jan 28 09:48:10 crc kubenswrapper[4735]: E0128 09:48:10.973789 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.973796 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.973890 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.973905 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcd226c-c318-412c-9483-2f359c9a1b75" containerName="controller-manager" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.973921 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e308a994-46ac-4ad4-9b44-f4e0cc910c81" containerName="route-controller-manager" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.974325 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.983721 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.983938 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.984168 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.984298 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.984396 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.984966 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dd876dc64-858k5"] Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.985803 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 09:48:10 crc kubenswrapper[4735]: I0128 09:48:10.991719 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.056946 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm"] Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.057759 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.061186 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.061230 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.061339 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.061451 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.061500 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.061710 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.072539 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm"] Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.074548 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3f5953-1756-4c27-b4e0-84f2c89480b4-serving-cert\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.074625 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-config\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.074655 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-client-ca\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.074702 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zx8\" (UniqueName: \"kubernetes.io/projected/db3f5953-1756-4c27-b4e0-84f2c89480b4-kube-api-access-48zx8\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.074732 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-proxy-ca-bundles\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.175445 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-config\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " 
pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.175499 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-client-ca\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.175544 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcngw\" (UniqueName: \"kubernetes.io/projected/e421830d-fd9e-4b5c-80b9-b0415a932839-kube-api-access-lcngw\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.175577 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48zx8\" (UniqueName: \"kubernetes.io/projected/db3f5953-1756-4c27-b4e0-84f2c89480b4-kube-api-access-48zx8\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.175604 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-proxy-ca-bundles\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.175654 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-config\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.175675 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e421830d-fd9e-4b5c-80b9-b0415a932839-serving-cert\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.175697 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-client-ca\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.175725 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3f5953-1756-4c27-b4e0-84f2c89480b4-serving-cert\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.176623 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-client-ca\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc 
kubenswrapper[4735]: I0128 09:48:11.177364 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-proxy-ca-bundles\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.177549 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-config\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.181971 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3f5953-1756-4c27-b4e0-84f2c89480b4-serving-cert\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.195861 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48zx8\" (UniqueName: \"kubernetes.io/projected/db3f5953-1756-4c27-b4e0-84f2c89480b4-kube-api-access-48zx8\") pod \"controller-manager-dd876dc64-858k5\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") " pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.276906 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-config\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " 
pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.277269 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e421830d-fd9e-4b5c-80b9-b0415a932839-serving-cert\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.277297 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-client-ca\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.277364 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcngw\" (UniqueName: \"kubernetes.io/projected/e421830d-fd9e-4b5c-80b9-b0415a932839-kube-api-access-lcngw\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.278136 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-config\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.278221 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-client-ca\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.281055 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e421830d-fd9e-4b5c-80b9-b0415a932839-serving-cert\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.294361 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcngw\" (UniqueName: \"kubernetes.io/projected/e421830d-fd9e-4b5c-80b9-b0415a932839-kube-api-access-lcngw\") pod \"route-controller-manager-86c5448d5f-cqtjm\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.305552 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.371784 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.511245 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dcd226c-c318-412c-9483-2f359c9a1b75" path="/var/lib/kubelet/pods/8dcd226c-c318-412c-9483-2f359c9a1b75/volumes" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.512023 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e308a994-46ac-4ad4-9b44-f4e0cc910c81" path="/var/lib/kubelet/pods/e308a994-46ac-4ad4-9b44-f4e0cc910c81/volumes" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.609026 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm"] Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.614868 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dd876dc64-858k5"] Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.681917 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm"] Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.725890 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dd876dc64-858k5"] Jan 28 09:48:11 crc kubenswrapper[4735]: W0128 09:48:11.735987 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb3f5953_1756_4c27_b4e0_84f2c89480b4.slice/crio-0bde1a409d63767a4c538fc7b140807b1dc68f46d8f5e579c255f2c5502079cb WatchSource:0}: Error finding container 0bde1a409d63767a4c538fc7b140807b1dc68f46d8f5e579c255f2c5502079cb: Status 404 returned error can't find the container with id 0bde1a409d63767a4c538fc7b140807b1dc68f46d8f5e579c255f2c5502079cb Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.822544 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" event={"ID":"e421830d-fd9e-4b5c-80b9-b0415a932839","Type":"ContainerStarted","Data":"96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d"} Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.822600 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" event={"ID":"e421830d-fd9e-4b5c-80b9-b0415a932839","Type":"ContainerStarted","Data":"e807d83ee67497b9524b7bc4588e371ff205255adbe3cf1a71de5b1ac58d9241"} Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.822630 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" podUID="e421830d-fd9e-4b5c-80b9-b0415a932839" containerName="route-controller-manager" containerID="cri-o://96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d" gracePeriod=30 Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.822952 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.824541 4735 patch_prober.go:28] interesting pod/route-controller-manager-86c5448d5f-cqtjm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.824571 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" podUID="e421830d-fd9e-4b5c-80b9-b0415a932839" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.826246 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" event={"ID":"db3f5953-1756-4c27-b4e0-84f2c89480b4","Type":"ContainerStarted","Data":"0bde1a409d63767a4c538fc7b140807b1dc68f46d8f5e579c255f2c5502079cb"}
Jan 28 09:48:11 crc kubenswrapper[4735]: I0128 09:48:11.842612 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" podStartSLOduration=0.842594688 podStartE2EDuration="842.594688ms" podCreationTimestamp="2026-01-28 09:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:48:11.840284925 +0000 UTC m=+344.989949308" watchObservedRunningTime="2026-01-28 09:48:11.842594688 +0000 UTC m=+344.992259061"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.188327 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-86c5448d5f-cqtjm_e421830d-fd9e-4b5c-80b9-b0415a932839/route-controller-manager/0.log"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.188703 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.288879 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-config\") pod \"e421830d-fd9e-4b5c-80b9-b0415a932839\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") "
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.288939 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-client-ca\") pod \"e421830d-fd9e-4b5c-80b9-b0415a932839\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") "
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.288987 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcngw\" (UniqueName: \"kubernetes.io/projected/e421830d-fd9e-4b5c-80b9-b0415a932839-kube-api-access-lcngw\") pod \"e421830d-fd9e-4b5c-80b9-b0415a932839\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") "
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.289037 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e421830d-fd9e-4b5c-80b9-b0415a932839-serving-cert\") pod \"e421830d-fd9e-4b5c-80b9-b0415a932839\" (UID: \"e421830d-fd9e-4b5c-80b9-b0415a932839\") "
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.289899 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-client-ca" (OuterVolumeSpecName: "client-ca") pod "e421830d-fd9e-4b5c-80b9-b0415a932839" (UID: "e421830d-fd9e-4b5c-80b9-b0415a932839"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.290192 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-config" (OuterVolumeSpecName: "config") pod "e421830d-fd9e-4b5c-80b9-b0415a932839" (UID: "e421830d-fd9e-4b5c-80b9-b0415a932839"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.300110 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e421830d-fd9e-4b5c-80b9-b0415a932839-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e421830d-fd9e-4b5c-80b9-b0415a932839" (UID: "e421830d-fd9e-4b5c-80b9-b0415a932839"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.306944 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e421830d-fd9e-4b5c-80b9-b0415a932839-kube-api-access-lcngw" (OuterVolumeSpecName: "kube-api-access-lcngw") pod "e421830d-fd9e-4b5c-80b9-b0415a932839" (UID: "e421830d-fd9e-4b5c-80b9-b0415a932839"). InnerVolumeSpecName "kube-api-access-lcngw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.390114 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e421830d-fd9e-4b5c-80b9-b0415a932839-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.390157 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-config\") on node \"crc\" DevicePath \"\""
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.390171 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e421830d-fd9e-4b5c-80b9-b0415a932839-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.390185 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcngw\" (UniqueName: \"kubernetes.io/projected/e421830d-fd9e-4b5c-80b9-b0415a932839-kube-api-access-lcngw\") on node \"crc\" DevicePath \"\""
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.835357 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" event={"ID":"db3f5953-1756-4c27-b4e0-84f2c89480b4","Type":"ContainerStarted","Data":"bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111"}
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.835450 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" podUID="db3f5953-1756-4c27-b4e0-84f2c89480b4" containerName="controller-manager" containerID="cri-o://bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111" gracePeriod=30
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.835576 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.839327 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-86c5448d5f-cqtjm_e421830d-fd9e-4b5c-80b9-b0415a932839/route-controller-manager/0.log"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.839377 4735 generic.go:334] "Generic (PLEG): container finished" podID="e421830d-fd9e-4b5c-80b9-b0415a932839" containerID="96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d" exitCode=2
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.839455 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" event={"ID":"e421830d-fd9e-4b5c-80b9-b0415a932839","Type":"ContainerDied","Data":"96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d"}
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.839487 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.839508 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm" event={"ID":"e421830d-fd9e-4b5c-80b9-b0415a932839","Type":"ContainerDied","Data":"e807d83ee67497b9524b7bc4588e371ff205255adbe3cf1a71de5b1ac58d9241"}
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.839529 4735 scope.go:117] "RemoveContainer" containerID="96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.841071 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.858900 4735 scope.go:117] "RemoveContainer" containerID="96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d"
Jan 28 09:48:12 crc kubenswrapper[4735]: E0128 09:48:12.859652 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d\": container with ID starting with 96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d not found: ID does not exist" containerID="96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.859702 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d"} err="failed to get container status \"96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d\": rpc error: code = NotFound desc = could not find container \"96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d\": container with ID starting with 96abdd6681e9a30608252f1271cb3da16b8ade32d9a6b81bba81365e0ed0471d not found: ID does not exist"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.880803 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" podStartSLOduration=2.880781633 podStartE2EDuration="2.880781633s" podCreationTimestamp="2026-01-28 09:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:48:12.85637383 +0000 UTC m=+346.006038203" watchObservedRunningTime="2026-01-28 09:48:12.880781633 +0000 UTC m=+346.030446026"
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.894236 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm"]
Jan 28 09:48:12 crc kubenswrapper[4735]: I0128 09:48:12.897081 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c5448d5f-cqtjm"]
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.168514 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.200410 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-client-ca\") pod \"db3f5953-1756-4c27-b4e0-84f2c89480b4\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") "
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.200462 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3f5953-1756-4c27-b4e0-84f2c89480b4-serving-cert\") pod \"db3f5953-1756-4c27-b4e0-84f2c89480b4\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") "
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.200509 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48zx8\" (UniqueName: \"kubernetes.io/projected/db3f5953-1756-4c27-b4e0-84f2c89480b4-kube-api-access-48zx8\") pod \"db3f5953-1756-4c27-b4e0-84f2c89480b4\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") "
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.200565 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-proxy-ca-bundles\") pod \"db3f5953-1756-4c27-b4e0-84f2c89480b4\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") "
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.200600 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-config\") pod \"db3f5953-1756-4c27-b4e0-84f2c89480b4\" (UID: \"db3f5953-1756-4c27-b4e0-84f2c89480b4\") "
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.201936 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-config" (OuterVolumeSpecName: "config") pod "db3f5953-1756-4c27-b4e0-84f2c89480b4" (UID: "db3f5953-1756-4c27-b4e0-84f2c89480b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.203602 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "db3f5953-1756-4c27-b4e0-84f2c89480b4" (UID: "db3f5953-1756-4c27-b4e0-84f2c89480b4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.203952 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "db3f5953-1756-4c27-b4e0-84f2c89480b4" (UID: "db3f5953-1756-4c27-b4e0-84f2c89480b4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.206017 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3f5953-1756-4c27-b4e0-84f2c89480b4-kube-api-access-48zx8" (OuterVolumeSpecName: "kube-api-access-48zx8") pod "db3f5953-1756-4c27-b4e0-84f2c89480b4" (UID: "db3f5953-1756-4c27-b4e0-84f2c89480b4"). InnerVolumeSpecName "kube-api-access-48zx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.206556 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3f5953-1756-4c27-b4e0-84f2c89480b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db3f5953-1756-4c27-b4e0-84f2c89480b4" (UID: "db3f5953-1756-4c27-b4e0-84f2c89480b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.302537 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-config\") on node \"crc\" DevicePath \"\""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.302586 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.302598 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db3f5953-1756-4c27-b4e0-84f2c89480b4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.302611 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48zx8\" (UniqueName: \"kubernetes.io/projected/db3f5953-1756-4c27-b4e0-84f2c89480b4-kube-api-access-48zx8\") on node \"crc\" DevicePath \"\""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.302624 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db3f5953-1756-4c27-b4e0-84f2c89480b4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.516521 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e421830d-fd9e-4b5c-80b9-b0415a932839" path="/var/lib/kubelet/pods/e421830d-fd9e-4b5c-80b9-b0415a932839/volumes"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.638316 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"]
Jan 28 09:48:13 crc kubenswrapper[4735]: E0128 09:48:13.638650 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e421830d-fd9e-4b5c-80b9-b0415a932839" containerName="route-controller-manager"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.638675 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e421830d-fd9e-4b5c-80b9-b0415a932839" containerName="route-controller-manager"
Jan 28 09:48:13 crc kubenswrapper[4735]: E0128 09:48:13.638686 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3f5953-1756-4c27-b4e0-84f2c89480b4" containerName="controller-manager"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.638693 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3f5953-1756-4c27-b4e0-84f2c89480b4" containerName="controller-manager"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.638820 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3f5953-1756-4c27-b4e0-84f2c89480b4" containerName="controller-manager"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.638839 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e421830d-fd9e-4b5c-80b9-b0415a932839" containerName="route-controller-manager"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.639337 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.643157 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.643460 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.643712 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.644128 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.644297 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.644458 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.653612 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"]
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.707375 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-client-ca\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.707437 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f31aafd4-4622-4664-a685-90279adcc270-serving-cert\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.707590 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27rhl\" (UniqueName: \"kubernetes.io/projected/f31aafd4-4622-4664-a685-90279adcc270-kube-api-access-27rhl\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.707718 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-config\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.809788 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27rhl\" (UniqueName: \"kubernetes.io/projected/f31aafd4-4622-4664-a685-90279adcc270-kube-api-access-27rhl\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.809931 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-config\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.809990 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-client-ca\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.810017 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f31aafd4-4622-4664-a685-90279adcc270-serving-cert\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.813035 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-client-ca\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.814618 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-config\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.817484 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f31aafd4-4622-4664-a685-90279adcc270-serving-cert\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.829549 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27rhl\" (UniqueName: \"kubernetes.io/projected/f31aafd4-4622-4664-a685-90279adcc270-kube-api-access-27rhl\") pod \"route-controller-manager-66b4db8ff5-cdzfl\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.849290 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.849330 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" event={"ID":"db3f5953-1756-4c27-b4e0-84f2c89480b4","Type":"ContainerDied","Data":"bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111"}
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.849400 4735 scope.go:117] "RemoveContainer" containerID="bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.849145 4735 generic.go:334] "Generic (PLEG): container finished" podID="db3f5953-1756-4c27-b4e0-84f2c89480b4" containerID="bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111" exitCode=0
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.849494 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd876dc64-858k5" event={"ID":"db3f5953-1756-4c27-b4e0-84f2c89480b4","Type":"ContainerDied","Data":"0bde1a409d63767a4c538fc7b140807b1dc68f46d8f5e579c255f2c5502079cb"}
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.871401 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dd876dc64-858k5"]
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.873410 4735 scope.go:117] "RemoveContainer" containerID="bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111"
Jan 28 09:48:13 crc kubenswrapper[4735]: E0128 09:48:13.874073 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111\": container with ID starting with bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111 not found: ID does not exist" containerID="bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.874143 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111"} err="failed to get container status \"bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111\": rpc error: code = NotFound desc = could not find container \"bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111\": container with ID starting with bc49161a233ecd557d946cb8f1bcae7d8ee825916bcd3e4e06ba78eb5ac97111 not found: ID does not exist"
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.879596 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dd876dc64-858k5"]
Jan 28 09:48:13 crc kubenswrapper[4735]: I0128 09:48:13.987154 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:14 crc kubenswrapper[4735]: I0128 09:48:14.183936 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"]
Jan 28 09:48:14 crc kubenswrapper[4735]: I0128 09:48:14.856627 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl" event={"ID":"f31aafd4-4622-4664-a685-90279adcc270","Type":"ContainerStarted","Data":"f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b"}
Jan 28 09:48:14 crc kubenswrapper[4735]: I0128 09:48:14.856672 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl" event={"ID":"f31aafd4-4622-4664-a685-90279adcc270","Type":"ContainerStarted","Data":"fe0206b8cbef2e9899857d620d0da4833fad2a78f427d6c6f94743eed6a2f55c"}
Jan 28 09:48:14 crc kubenswrapper[4735]: I0128 09:48:14.856894 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:14 crc kubenswrapper[4735]: I0128 09:48:14.861208 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"
Jan 28 09:48:14 crc kubenswrapper[4735]: I0128 09:48:14.891071 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl" podStartSLOduration=3.891053722 podStartE2EDuration="3.891053722s" podCreationTimestamp="2026-01-28 09:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:48:14.871543184 +0000 UTC m=+348.021207557" watchObservedRunningTime="2026-01-28 09:48:14.891053722 +0000 UTC m=+348.040718095"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.512496 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3f5953-1756-4c27-b4e0-84f2c89480b4" path="/var/lib/kubelet/pods/db3f5953-1756-4c27-b4e0-84f2c89480b4/volumes"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.641315 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6796c7fd86-pth74"]
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.642414 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.646733 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.646928 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.647190 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.648132 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.649065 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.649247 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.657008 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.660884 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6796c7fd86-pth74"]
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.752444 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-proxy-ca-bundles\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.752522 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-config\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.752569 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nxr4\" (UniqueName: \"kubernetes.io/projected/a739ca2f-4798-459c-8d29-368652be890e-kube-api-access-2nxr4\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.752636 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a739ca2f-4798-459c-8d29-368652be890e-serving-cert\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.752702 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-client-ca\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.853709 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-client-ca\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.853899 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-proxy-ca-bundles\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.853977 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-config\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.854045 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxr4\" (UniqueName: \"kubernetes.io/projected/a739ca2f-4798-459c-8d29-368652be890e-kube-api-access-2nxr4\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.854141 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a739ca2f-4798-459c-8d29-368652be890e-serving-cert\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.855893 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-client-ca\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.856384 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-config\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.856389 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-proxy-ca-bundles\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.861218 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a739ca2f-4798-459c-8d29-368652be890e-serving-cert\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.890327 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nxr4\" (UniqueName: \"kubernetes.io/projected/a739ca2f-4798-459c-8d29-368652be890e-kube-api-access-2nxr4\") pod \"controller-manager-6796c7fd86-pth74\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:15 crc kubenswrapper[4735]: I0128 09:48:15.959866 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
Jan 28 09:48:16 crc kubenswrapper[4735]: I0128 09:48:16.238885 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6796c7fd86-pth74"]
Jan 28 09:48:16 crc kubenswrapper[4735]: W0128 09:48:16.243500 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda739ca2f_4798_459c_8d29_368652be890e.slice/crio-ca101645134fb1113d213bf1f14823dea7a0f2613f5092a752ba4173bcd866c3 WatchSource:0}: Error finding container ca101645134fb1113d213bf1f14823dea7a0f2613f5092a752ba4173bcd866c3: Status 404 returned error can't find the container with id ca101645134fb1113d213bf1f14823dea7a0f2613f5092a752ba4173bcd866c3
Jan 28 09:48:16 crc kubenswrapper[4735]: I0128 09:48:16.879353 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74"
event={"ID":"a739ca2f-4798-459c-8d29-368652be890e","Type":"ContainerStarted","Data":"5d49ec6c31184240aeb3b2d008a0a2e9c79df7431ba08cc51d72d985613285f5"} Jan 28 09:48:16 crc kubenswrapper[4735]: I0128 09:48:16.879942 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74" event={"ID":"a739ca2f-4798-459c-8d29-368652be890e","Type":"ContainerStarted","Data":"ca101645134fb1113d213bf1f14823dea7a0f2613f5092a752ba4173bcd866c3"} Jan 28 09:48:16 crc kubenswrapper[4735]: I0128 09:48:16.899575 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74" podStartSLOduration=5.899559541 podStartE2EDuration="5.899559541s" podCreationTimestamp="2026-01-28 09:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:48:16.898392609 +0000 UTC m=+350.048056992" watchObservedRunningTime="2026-01-28 09:48:16.899559541 +0000 UTC m=+350.049223914" Jan 28 09:48:17 crc kubenswrapper[4735]: I0128 09:48:17.884217 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74" Jan 28 09:48:17 crc kubenswrapper[4735]: I0128 09:48:17.889421 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74" Jan 28 09:48:29 crc kubenswrapper[4735]: I0128 09:48:29.842375 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"] Jan 28 09:48:29 crc kubenswrapper[4735]: I0128 09:48:29.843070 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl" podUID="f31aafd4-4622-4664-a685-90279adcc270" 
containerName="route-controller-manager" containerID="cri-o://f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b" gracePeriod=30 Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.296482 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.360979 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-config\") pod \"f31aafd4-4622-4664-a685-90279adcc270\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.361130 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-client-ca\") pod \"f31aafd4-4622-4664-a685-90279adcc270\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.361186 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f31aafd4-4622-4664-a685-90279adcc270-serving-cert\") pod \"f31aafd4-4622-4664-a685-90279adcc270\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.361250 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27rhl\" (UniqueName: \"kubernetes.io/projected/f31aafd4-4622-4664-a685-90279adcc270-kube-api-access-27rhl\") pod \"f31aafd4-4622-4664-a685-90279adcc270\" (UID: \"f31aafd4-4622-4664-a685-90279adcc270\") " Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.362281 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-client-ca" 
(OuterVolumeSpecName: "client-ca") pod "f31aafd4-4622-4664-a685-90279adcc270" (UID: "f31aafd4-4622-4664-a685-90279adcc270"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.362562 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-config" (OuterVolumeSpecName: "config") pod "f31aafd4-4622-4664-a685-90279adcc270" (UID: "f31aafd4-4622-4664-a685-90279adcc270"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.366981 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31aafd4-4622-4664-a685-90279adcc270-kube-api-access-27rhl" (OuterVolumeSpecName: "kube-api-access-27rhl") pod "f31aafd4-4622-4664-a685-90279adcc270" (UID: "f31aafd4-4622-4664-a685-90279adcc270"). InnerVolumeSpecName "kube-api-access-27rhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.370984 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31aafd4-4622-4664-a685-90279adcc270-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f31aafd4-4622-4664-a685-90279adcc270" (UID: "f31aafd4-4622-4664-a685-90279adcc270"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.462541 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.462576 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f31aafd4-4622-4664-a685-90279adcc270-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.462590 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f31aafd4-4622-4664-a685-90279adcc270-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.462603 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27rhl\" (UniqueName: \"kubernetes.io/projected/f31aafd4-4622-4664-a685-90279adcc270-kube-api-access-27rhl\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.970992 4735 generic.go:334] "Generic (PLEG): container finished" podID="f31aafd4-4622-4664-a685-90279adcc270" containerID="f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b" exitCode=0 Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.971076 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl" Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.971102 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl" event={"ID":"f31aafd4-4622-4664-a685-90279adcc270","Type":"ContainerDied","Data":"f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b"} Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.972005 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl" event={"ID":"f31aafd4-4622-4664-a685-90279adcc270","Type":"ContainerDied","Data":"fe0206b8cbef2e9899857d620d0da4833fad2a78f427d6c6f94743eed6a2f55c"} Jan 28 09:48:30 crc kubenswrapper[4735]: I0128 09:48:30.972037 4735 scope.go:117] "RemoveContainer" containerID="f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.003898 4735 scope.go:117] "RemoveContainer" containerID="f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b" Jan 28 09:48:31 crc kubenswrapper[4735]: E0128 09:48:31.004558 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b\": container with ID starting with f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b not found: ID does not exist" containerID="f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.004621 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b"} err="failed to get container status \"f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b\": rpc error: code = NotFound desc 
= could not find container \"f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b\": container with ID starting with f8f163e3ded8cc1dc01896f0ca2bacd8f50fd078bb0a4c889d40b1cfcc1d3b9b not found: ID does not exist" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.006949 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"] Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.018475 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b4db8ff5-cdzfl"] Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.508607 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31aafd4-4622-4664-a685-90279adcc270" path="/var/lib/kubelet/pods/f31aafd4-4622-4664-a685-90279adcc270/volumes" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.648945 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw"] Jan 28 09:48:31 crc kubenswrapper[4735]: E0128 09:48:31.649124 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31aafd4-4622-4664-a685-90279adcc270" containerName="route-controller-manager" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.649136 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31aafd4-4622-4664-a685-90279adcc270" containerName="route-controller-manager" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.649228 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31aafd4-4622-4664-a685-90279adcc270" containerName="route-controller-manager" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.649545 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.652404 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.653486 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.653867 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.656703 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.657324 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.657788 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.680082 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw"] Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.777703 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3fbb373-faec-4226-a23b-01067941becf-client-ca\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.777760 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq2r4\" (UniqueName: \"kubernetes.io/projected/c3fbb373-faec-4226-a23b-01067941becf-kube-api-access-lq2r4\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.777848 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fbb373-faec-4226-a23b-01067941becf-serving-cert\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.777908 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fbb373-faec-4226-a23b-01067941becf-config\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.878508 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3fbb373-faec-4226-a23b-01067941becf-client-ca\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.878567 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq2r4\" (UniqueName: \"kubernetes.io/projected/c3fbb373-faec-4226-a23b-01067941becf-kube-api-access-lq2r4\") pod 
\"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.878599 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fbb373-faec-4226-a23b-01067941becf-serving-cert\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.878625 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fbb373-faec-4226-a23b-01067941becf-config\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.879689 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fbb373-faec-4226-a23b-01067941becf-config\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.880298 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3fbb373-faec-4226-a23b-01067941becf-client-ca\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.884646 4735 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fbb373-faec-4226-a23b-01067941becf-serving-cert\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.899322 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq2r4\" (UniqueName: \"kubernetes.io/projected/c3fbb373-faec-4226-a23b-01067941becf-kube-api-access-lq2r4\") pod \"route-controller-manager-86c5448d5f-62dfw\" (UID: \"c3fbb373-faec-4226-a23b-01067941becf\") " pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:31 crc kubenswrapper[4735]: I0128 09:48:31.978124 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.226318 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nhtt4"] Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.227176 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.238226 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nhtt4"] Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.283013 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-registry-tls\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.283359 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.283387 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-bound-sa-token\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.283403 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.283422 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b67qv\" (UniqueName: \"kubernetes.io/projected/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-kube-api-access-b67qv\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.283454 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-trusted-ca\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.283491 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-registry-certificates\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.283591 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.307932 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.351642 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw"] Jan 28 09:48:32 crc kubenswrapper[4735]: W0128 09:48:32.356269 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3fbb373_faec_4226_a23b_01067941becf.slice/crio-35ef56357aef7ffcf03177e2f62a98e7e3a259c6410dd8be2a97e2093b55c965 WatchSource:0}: Error finding container 35ef56357aef7ffcf03177e2f62a98e7e3a259c6410dd8be2a97e2093b55c965: Status 404 returned error can't find the container with id 35ef56357aef7ffcf03177e2f62a98e7e3a259c6410dd8be2a97e2093b55c965 Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.384735 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-trusted-ca\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.384820 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-registry-certificates\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.384847 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.384886 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-registry-tls\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.384935 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-bound-sa-token\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.384955 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.384978 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b67qv\" (UniqueName: \"kubernetes.io/projected/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-kube-api-access-b67qv\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.386222 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-trusted-ca\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.386477 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.387205 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-registry-certificates\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.390731 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-registry-tls\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.394088 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc 
kubenswrapper[4735]: I0128 09:48:32.415671 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b67qv\" (UniqueName: \"kubernetes.io/projected/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-kube-api-access-b67qv\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.422189 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9-bound-sa-token\") pod \"image-registry-66df7c8f76-nhtt4\" (UID: \"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.545154 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.972447 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nhtt4"] Jan 28 09:48:32 crc kubenswrapper[4735]: W0128 09:48:32.983136 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96be2aa4_c295_4b2d_a1c5_fcd954d0cbd9.slice/crio-e356c02e58ae4b05ced9f2009dc44be097dbc651daf6f555638863fcc6bd8071 WatchSource:0}: Error finding container e356c02e58ae4b05ced9f2009dc44be097dbc651daf6f555638863fcc6bd8071: Status 404 returned error can't find the container with id e356c02e58ae4b05ced9f2009dc44be097dbc651daf6f555638863fcc6bd8071 Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.989179 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" 
event={"ID":"c3fbb373-faec-4226-a23b-01067941becf","Type":"ContainerStarted","Data":"da66ce9ae7d123bdbeee83d8d9b7e60c8268c57db6c15b3bb67bcf608276f2f7"} Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.989213 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" event={"ID":"c3fbb373-faec-4226-a23b-01067941becf","Type":"ContainerStarted","Data":"35ef56357aef7ffcf03177e2f62a98e7e3a259c6410dd8be2a97e2093b55c965"} Jan 28 09:48:32 crc kubenswrapper[4735]: I0128 09:48:32.990121 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:33 crc kubenswrapper[4735]: I0128 09:48:33.012141 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" podStartSLOduration=4.012122026 podStartE2EDuration="4.012122026s" podCreationTimestamp="2026-01-28 09:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:48:33.007443967 +0000 UTC m=+366.157108340" watchObservedRunningTime="2026-01-28 09:48:33.012122026 +0000 UTC m=+366.161786399" Jan 28 09:48:33 crc kubenswrapper[4735]: I0128 09:48:33.268669 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86c5448d5f-62dfw" Jan 28 09:48:33 crc kubenswrapper[4735]: I0128 09:48:33.999121 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" event={"ID":"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9","Type":"ContainerStarted","Data":"ae86491ffa92f49b01d1b995ec025280efcddb61ec29022a8b80ab1b6a921459"} Jan 28 09:48:33 crc kubenswrapper[4735]: I0128 09:48:33.999600 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:34 crc kubenswrapper[4735]: I0128 09:48:33.999631 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" event={"ID":"96be2aa4-c295-4b2d-a1c5-fcd954d0cbd9","Type":"ContainerStarted","Data":"e356c02e58ae4b05ced9f2009dc44be097dbc651daf6f555638863fcc6bd8071"} Jan 28 09:48:34 crc kubenswrapper[4735]: I0128 09:48:34.022923 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" podStartSLOduration=2.022889485 podStartE2EDuration="2.022889485s" podCreationTimestamp="2026-01-28 09:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:48:34.022673869 +0000 UTC m=+367.172338252" watchObservedRunningTime="2026-01-28 09:48:34.022889485 +0000 UTC m=+367.172553928" Jan 28 09:48:35 crc kubenswrapper[4735]: I0128 09:48:35.429648 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:48:35 crc kubenswrapper[4735]: I0128 09:48:35.429714 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:48:49 crc kubenswrapper[4735]: I0128 09:48:49.855318 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6796c7fd86-pth74"] Jan 28 09:48:49 crc kubenswrapper[4735]: I0128 
09:48:49.859493 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74" podUID="a739ca2f-4798-459c-8d29-368652be890e" containerName="controller-manager" containerID="cri-o://5d49ec6c31184240aeb3b2d008a0a2e9c79df7431ba08cc51d72d985613285f5" gracePeriod=30 Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.089906 4735 generic.go:334] "Generic (PLEG): container finished" podID="a739ca2f-4798-459c-8d29-368652be890e" containerID="5d49ec6c31184240aeb3b2d008a0a2e9c79df7431ba08cc51d72d985613285f5" exitCode=0 Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.090004 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74" event={"ID":"a739ca2f-4798-459c-8d29-368652be890e","Type":"ContainerDied","Data":"5d49ec6c31184240aeb3b2d008a0a2e9c79df7431ba08cc51d72d985613285f5"} Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.327096 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.502314 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-client-ca\") pod \"a739ca2f-4798-459c-8d29-368652be890e\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.502434 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-proxy-ca-bundles\") pod \"a739ca2f-4798-459c-8d29-368652be890e\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.502533 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a739ca2f-4798-459c-8d29-368652be890e-serving-cert\") pod \"a739ca2f-4798-459c-8d29-368652be890e\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.502577 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-config\") pod \"a739ca2f-4798-459c-8d29-368652be890e\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.502620 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nxr4\" (UniqueName: \"kubernetes.io/projected/a739ca2f-4798-459c-8d29-368652be890e-kube-api-access-2nxr4\") pod \"a739ca2f-4798-459c-8d29-368652be890e\" (UID: \"a739ca2f-4798-459c-8d29-368652be890e\") " Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.503316 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a739ca2f-4798-459c-8d29-368652be890e" (UID: "a739ca2f-4798-459c-8d29-368652be890e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.503335 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-client-ca" (OuterVolumeSpecName: "client-ca") pod "a739ca2f-4798-459c-8d29-368652be890e" (UID: "a739ca2f-4798-459c-8d29-368652be890e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.503652 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-config" (OuterVolumeSpecName: "config") pod "a739ca2f-4798-459c-8d29-368652be890e" (UID: "a739ca2f-4798-459c-8d29-368652be890e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.509194 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a739ca2f-4798-459c-8d29-368652be890e-kube-api-access-2nxr4" (OuterVolumeSpecName: "kube-api-access-2nxr4") pod "a739ca2f-4798-459c-8d29-368652be890e" (UID: "a739ca2f-4798-459c-8d29-368652be890e"). InnerVolumeSpecName "kube-api-access-2nxr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.510066 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a739ca2f-4798-459c-8d29-368652be890e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a739ca2f-4798-459c-8d29-368652be890e" (UID: "a739ca2f-4798-459c-8d29-368652be890e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.605255 4735 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.605982 4735 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.606006 4735 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a739ca2f-4798-459c-8d29-368652be890e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.606016 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a739ca2f-4798-459c-8d29-368652be890e-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:50 crc kubenswrapper[4735]: I0128 09:48:50.606024 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nxr4\" (UniqueName: \"kubernetes.io/projected/a739ca2f-4798-459c-8d29-368652be890e-kube-api-access-2nxr4\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.106041 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74" event={"ID":"a739ca2f-4798-459c-8d29-368652be890e","Type":"ContainerDied","Data":"ca101645134fb1113d213bf1f14823dea7a0f2613f5092a752ba4173bcd866c3"} Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.106112 4735 scope.go:117] "RemoveContainer" containerID="5d49ec6c31184240aeb3b2d008a0a2e9c79df7431ba08cc51d72d985613285f5" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.106138 4735 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6796c7fd86-pth74" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.151464 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6796c7fd86-pth74"] Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.156152 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6796c7fd86-pth74"] Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.508713 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a739ca2f-4798-459c-8d29-368652be890e" path="/var/lib/kubelet/pods/a739ca2f-4798-459c-8d29-368652be890e/volumes" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.663230 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dd876dc64-nrqk9"] Jan 28 09:48:51 crc kubenswrapper[4735]: E0128 09:48:51.663544 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a739ca2f-4798-459c-8d29-368652be890e" containerName="controller-manager" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.663565 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a739ca2f-4798-459c-8d29-368652be890e" containerName="controller-manager" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.663672 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a739ca2f-4798-459c-8d29-368652be890e" containerName="controller-manager" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.664265 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.668701 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.668722 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.668852 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.669212 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.669263 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.669212 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.674178 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.675555 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dd876dc64-nrqk9"] Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.718462 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-proxy-ca-bundles\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " 
pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.718505 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-config\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.718650 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-serving-cert\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.718709 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blvj6\" (UniqueName: \"kubernetes.io/projected/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-kube-api-access-blvj6\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.718766 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-client-ca\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.819555 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-config\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.819605 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-proxy-ca-bundles\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.819663 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-serving-cert\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.819686 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-client-ca\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.819704 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blvj6\" (UniqueName: \"kubernetes.io/projected/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-kube-api-access-blvj6\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.821873 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-client-ca\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.821913 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-config\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.822575 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-proxy-ca-bundles\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.833638 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-serving-cert\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc kubenswrapper[4735]: I0128 09:48:51.839971 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blvj6\" (UniqueName: \"kubernetes.io/projected/2247aad5-3f89-4bcd-8e8c-dbd99b4ce488-kube-api-access-blvj6\") pod \"controller-manager-dd876dc64-nrqk9\" (UID: \"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488\") " pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:51 crc 
kubenswrapper[4735]: I0128 09:48:51.992412 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:52 crc kubenswrapper[4735]: I0128 09:48:52.205089 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dd876dc64-nrqk9"] Jan 28 09:48:52 crc kubenswrapper[4735]: W0128 09:48:52.211585 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2247aad5_3f89_4bcd_8e8c_dbd99b4ce488.slice/crio-c2ac09305144788b29af4a28025f0381942d277e54b43eb538feb1fa431ac855 WatchSource:0}: Error finding container c2ac09305144788b29af4a28025f0381942d277e54b43eb538feb1fa431ac855: Status 404 returned error can't find the container with id c2ac09305144788b29af4a28025f0381942d277e54b43eb538feb1fa431ac855 Jan 28 09:48:52 crc kubenswrapper[4735]: I0128 09:48:52.554509 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nhtt4" Jan 28 09:48:52 crc kubenswrapper[4735]: I0128 09:48:52.610739 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85zqb"] Jan 28 09:48:53 crc kubenswrapper[4735]: I0128 09:48:53.122948 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" event={"ID":"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488","Type":"ContainerStarted","Data":"5b0bf3d33c67b9aa18a459846c66b7e0a60c433776493ff4cb96f7314c77957b"} Jan 28 09:48:53 crc kubenswrapper[4735]: I0128 09:48:53.123407 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" event={"ID":"2247aad5-3f89-4bcd-8e8c-dbd99b4ce488","Type":"ContainerStarted","Data":"c2ac09305144788b29af4a28025f0381942d277e54b43eb538feb1fa431ac855"} Jan 28 09:48:53 crc 
kubenswrapper[4735]: I0128 09:48:53.123440 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:53 crc kubenswrapper[4735]: I0128 09:48:53.126912 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" Jan 28 09:48:53 crc kubenswrapper[4735]: I0128 09:48:53.145096 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dd876dc64-nrqk9" podStartSLOduration=4.145077513 podStartE2EDuration="4.145077513s" podCreationTimestamp="2026-01-28 09:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:48:53.142726488 +0000 UTC m=+386.292390861" watchObservedRunningTime="2026-01-28 09:48:53.145077513 +0000 UTC m=+386.294741886" Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.879113 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcz6p"] Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.880109 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lcz6p" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerName="registry-server" containerID="cri-o://83562bc2c0908a7791a47689f660f443cb82833b763025069e045f0cc1570632" gracePeriod=30 Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.889338 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z4vz"] Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.889737 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4z4vz" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" containerName="registry-server" 
containerID="cri-o://31910b6136de3f52b1eafa990902d962b14e3031697893264d2eac51fa437bce" gracePeriod=30 Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.902285 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tkljg"] Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.902532 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" containerID="cri-o://8a9b6b5c1f95d843ad64b4b3ec2a9b5a0c609d02b15313343324a43d262f2257" gracePeriod=30 Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.917405 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzl24"] Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.917850 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzl24" podUID="b7b59344-a72d-41c4-a8af-405990047117" containerName="registry-server" containerID="cri-o://0b7cea084b689e81ee7e74cd4e5d5b790ec12189cb8e0b35e64e9c45bc78c207" gracePeriod=30 Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.923897 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mrxf2"] Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.925027 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.949237 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ckmr"] Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.949632 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6ckmr" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerName="registry-server" containerID="cri-o://80eadcd670c298d4bb0c29e07867446849ccd009d39db6e90edf2baf3833f029" gracePeriod=30 Jan 28 09:48:57 crc kubenswrapper[4735]: I0128 09:48:57.953697 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mrxf2"] Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.004530 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgk9h\" (UniqueName: \"kubernetes.io/projected/761659b6-1552-4fd3-8e5a-10bf29a772b3-kube-api-access-lgk9h\") pod \"marketplace-operator-79b997595-mrxf2\" (UID: \"761659b6-1552-4fd3-8e5a-10bf29a772b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.004614 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/761659b6-1552-4fd3-8e5a-10bf29a772b3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mrxf2\" (UID: \"761659b6-1552-4fd3-8e5a-10bf29a772b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.004677 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/761659b6-1552-4fd3-8e5a-10bf29a772b3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mrxf2\" (UID: \"761659b6-1552-4fd3-8e5a-10bf29a772b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.106685 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/761659b6-1552-4fd3-8e5a-10bf29a772b3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mrxf2\" (UID: \"761659b6-1552-4fd3-8e5a-10bf29a772b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.107143 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgk9h\" (UniqueName: \"kubernetes.io/projected/761659b6-1552-4fd3-8e5a-10bf29a772b3-kube-api-access-lgk9h\") pod \"marketplace-operator-79b997595-mrxf2\" (UID: \"761659b6-1552-4fd3-8e5a-10bf29a772b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.107199 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/761659b6-1552-4fd3-8e5a-10bf29a772b3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mrxf2\" (UID: \"761659b6-1552-4fd3-8e5a-10bf29a772b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.109460 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/761659b6-1552-4fd3-8e5a-10bf29a772b3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mrxf2\" (UID: \"761659b6-1552-4fd3-8e5a-10bf29a772b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc 
kubenswrapper[4735]: I0128 09:48:58.122869 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/761659b6-1552-4fd3-8e5a-10bf29a772b3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mrxf2\" (UID: \"761659b6-1552-4fd3-8e5a-10bf29a772b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.129072 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgk9h\" (UniqueName: \"kubernetes.io/projected/761659b6-1552-4fd3-8e5a-10bf29a772b3-kube-api-access-lgk9h\") pod \"marketplace-operator-79b997595-mrxf2\" (UID: \"761659b6-1552-4fd3-8e5a-10bf29a772b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.154800 4735 generic.go:334] "Generic (PLEG): container finished" podID="707fe5c2-87d6-4636-9270-b3295925cefd" containerID="31910b6136de3f52b1eafa990902d962b14e3031697893264d2eac51fa437bce" exitCode=0 Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.154875 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z4vz" event={"ID":"707fe5c2-87d6-4636-9270-b3295925cefd","Type":"ContainerDied","Data":"31910b6136de3f52b1eafa990902d962b14e3031697893264d2eac51fa437bce"} Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.156775 4735 generic.go:334] "Generic (PLEG): container finished" podID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerID="80eadcd670c298d4bb0c29e07867446849ccd009d39db6e90edf2baf3833f029" exitCode=0 Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.156923 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ckmr" event={"ID":"297a2cc5-ec84-4fd0-9bf1-25ccc5062045","Type":"ContainerDied","Data":"80eadcd670c298d4bb0c29e07867446849ccd009d39db6e90edf2baf3833f029"} Jan 28 
09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.161555 4735 generic.go:334] "Generic (PLEG): container finished" podID="b7b59344-a72d-41c4-a8af-405990047117" containerID="0b7cea084b689e81ee7e74cd4e5d5b790ec12189cb8e0b35e64e9c45bc78c207" exitCode=0 Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.161639 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzl24" event={"ID":"b7b59344-a72d-41c4-a8af-405990047117","Type":"ContainerDied","Data":"0b7cea084b689e81ee7e74cd4e5d5b790ec12189cb8e0b35e64e9c45bc78c207"} Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.164087 4735 generic.go:334] "Generic (PLEG): container finished" podID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerID="8a9b6b5c1f95d843ad64b4b3ec2a9b5a0c609d02b15313343324a43d262f2257" exitCode=0 Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.164133 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" event={"ID":"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa","Type":"ContainerDied","Data":"8a9b6b5c1f95d843ad64b4b3ec2a9b5a0c609d02b15313343324a43d262f2257"} Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.164157 4735 scope.go:117] "RemoveContainer" containerID="e76d9bcc7d840a7c255c309b1e86523644a67b721a85e90b556bfb8d8d1bd198" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.167184 4735 generic.go:334] "Generic (PLEG): container finished" podID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerID="83562bc2c0908a7791a47689f660f443cb82833b763025069e045f0cc1570632" exitCode=0 Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.167237 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcz6p" event={"ID":"b4ba59eb-a71b-467f-a035-9824e10a7ffe","Type":"ContainerDied","Data":"83562bc2c0908a7791a47689f660f443cb82833b763025069e045f0cc1570632"} Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.329211 4735 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.392170 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.515867 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-utilities\") pod \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.515963 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9f4r\" (UniqueName: \"kubernetes.io/projected/b4ba59eb-a71b-467f-a035-9824e10a7ffe-kube-api-access-h9f4r\") pod \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.516002 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-catalog-content\") pod \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\" (UID: \"b4ba59eb-a71b-467f-a035-9824e10a7ffe\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.518043 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-utilities" (OuterVolumeSpecName: "utilities") pod "b4ba59eb-a71b-467f-a035-9824e10a7ffe" (UID: "b4ba59eb-a71b-467f-a035-9824e10a7ffe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.521626 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ba59eb-a71b-467f-a035-9824e10a7ffe-kube-api-access-h9f4r" (OuterVolumeSpecName: "kube-api-access-h9f4r") pod "b4ba59eb-a71b-467f-a035-9824e10a7ffe" (UID: "b4ba59eb-a71b-467f-a035-9824e10a7ffe"). InnerVolumeSpecName "kube-api-access-h9f4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.588601 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4ba59eb-a71b-467f-a035-9824e10a7ffe" (UID: "b4ba59eb-a71b-467f-a035-9824e10a7ffe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.618000 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9f4r\" (UniqueName: \"kubernetes.io/projected/b4ba59eb-a71b-467f-a035-9824e10a7ffe-kube-api-access-h9f4r\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.618030 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.618039 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ba59eb-a71b-467f-a035-9824e10a7ffe-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.639327 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.690119 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.707285 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.718880 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzxk2\" (UniqueName: \"kubernetes.io/projected/707fe5c2-87d6-4636-9270-b3295925cefd-kube-api-access-lzxk2\") pod \"707fe5c2-87d6-4636-9270-b3295925cefd\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.719010 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-catalog-content\") pod \"707fe5c2-87d6-4636-9270-b3295925cefd\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.719069 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-utilities\") pod \"707fe5c2-87d6-4636-9270-b3295925cefd\" (UID: \"707fe5c2-87d6-4636-9270-b3295925cefd\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.721312 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-utilities" (OuterVolumeSpecName: "utilities") pod "707fe5c2-87d6-4636-9270-b3295925cefd" (UID: "707fe5c2-87d6-4636-9270-b3295925cefd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.724313 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707fe5c2-87d6-4636-9270-b3295925cefd-kube-api-access-lzxk2" (OuterVolumeSpecName: "kube-api-access-lzxk2") pod "707fe5c2-87d6-4636-9270-b3295925cefd" (UID: "707fe5c2-87d6-4636-9270-b3295925cefd"). InnerVolumeSpecName "kube-api-access-lzxk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.727136 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.776083 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "707fe5c2-87d6-4636-9270-b3295925cefd" (UID: "707fe5c2-87d6-4636-9270-b3295925cefd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.820372 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-trusted-ca\") pod \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.820442 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-utilities\") pod \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.820508 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-operator-metrics\") pod \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.820561 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fxkv\" (UniqueName: \"kubernetes.io/projected/b7b59344-a72d-41c4-a8af-405990047117-kube-api-access-5fxkv\") pod \"b7b59344-a72d-41c4-a8af-405990047117\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.820586 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-catalog-content\") pod \"b7b59344-a72d-41c4-a8af-405990047117\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.820625 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-catalog-content\") pod \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.820647 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzqgh\" (UniqueName: \"kubernetes.io/projected/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-kube-api-access-fzqgh\") pod \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\" (UID: \"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.820669 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-utilities\") pod \"b7b59344-a72d-41c4-a8af-405990047117\" (UID: \"b7b59344-a72d-41c4-a8af-405990047117\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.820722 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mks7\" (UniqueName: \"kubernetes.io/projected/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-kube-api-access-2mks7\") pod \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\" (UID: \"297a2cc5-ec84-4fd0-9bf1-25ccc5062045\") " Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.821004 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.821033 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/707fe5c2-87d6-4636-9270-b3295925cefd-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.821046 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzxk2\" (UniqueName: 
\"kubernetes.io/projected/707fe5c2-87d6-4636-9270-b3295925cefd-kube-api-access-lzxk2\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.821196 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" (UID: "d9c6bba7-8f28-4afe-baa8-f070b3f0acfa"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.821523 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-utilities" (OuterVolumeSpecName: "utilities") pod "297a2cc5-ec84-4fd0-9bf1-25ccc5062045" (UID: "297a2cc5-ec84-4fd0-9bf1-25ccc5062045"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.822077 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-utilities" (OuterVolumeSpecName: "utilities") pod "b7b59344-a72d-41c4-a8af-405990047117" (UID: "b7b59344-a72d-41c4-a8af-405990047117"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.823477 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b59344-a72d-41c4-a8af-405990047117-kube-api-access-5fxkv" (OuterVolumeSpecName: "kube-api-access-5fxkv") pod "b7b59344-a72d-41c4-a8af-405990047117" (UID: "b7b59344-a72d-41c4-a8af-405990047117"). InnerVolumeSpecName "kube-api-access-5fxkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.823720 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-kube-api-access-2mks7" (OuterVolumeSpecName: "kube-api-access-2mks7") pod "297a2cc5-ec84-4fd0-9bf1-25ccc5062045" (UID: "297a2cc5-ec84-4fd0-9bf1-25ccc5062045"). InnerVolumeSpecName "kube-api-access-2mks7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.824692 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" (UID: "d9c6bba7-8f28-4afe-baa8-f070b3f0acfa"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.824891 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-kube-api-access-fzqgh" (OuterVolumeSpecName: "kube-api-access-fzqgh") pod "d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" (UID: "d9c6bba7-8f28-4afe-baa8-f070b3f0acfa"). InnerVolumeSpecName "kube-api-access-fzqgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.844407 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7b59344-a72d-41c4-a8af-405990047117" (UID: "b7b59344-a72d-41c4-a8af-405990047117"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.921995 4735 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.922034 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.922047 4735 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.922058 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fxkv\" (UniqueName: \"kubernetes.io/projected/b7b59344-a72d-41c4-a8af-405990047117-kube-api-access-5fxkv\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.922068 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.922078 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzqgh\" (UniqueName: \"kubernetes.io/projected/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa-kube-api-access-fzqgh\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.922087 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b59344-a72d-41c4-a8af-405990047117-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 
crc kubenswrapper[4735]: I0128 09:48:58.922097 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mks7\" (UniqueName: \"kubernetes.io/projected/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-kube-api-access-2mks7\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.934680 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "297a2cc5-ec84-4fd0-9bf1-25ccc5062045" (UID: "297a2cc5-ec84-4fd0-9bf1-25ccc5062045"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:48:58 crc kubenswrapper[4735]: I0128 09:48:58.953633 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mrxf2"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.023000 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297a2cc5-ec84-4fd0-9bf1-25ccc5062045-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.183731 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ckmr" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.183726 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ckmr" event={"ID":"297a2cc5-ec84-4fd0-9bf1-25ccc5062045","Type":"ContainerDied","Data":"e11b72d7cc2c46c62acc8c28d898c6dc3573e1138aba414e3ba439e20721ccbc"} Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.183973 4735 scope.go:117] "RemoveContainer" containerID="80eadcd670c298d4bb0c29e07867446849ccd009d39db6e90edf2baf3833f029" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.188777 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzl24" event={"ID":"b7b59344-a72d-41c4-a8af-405990047117","Type":"ContainerDied","Data":"926b4313ecf6a5f740c5ca75c35392e08a46e4ff7e91224c7251c0d103679acb"} Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.188925 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzl24" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.199231 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" event={"ID":"d9c6bba7-8f28-4afe-baa8-f070b3f0acfa","Type":"ContainerDied","Data":"cde43b99a43760f8ffc18de62ed169f781305499b2a47c17564850cbd46173e1"} Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.199251 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tkljg" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.203078 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcz6p" event={"ID":"b4ba59eb-a71b-467f-a035-9824e10a7ffe","Type":"ContainerDied","Data":"0400549099c42634d1a0be71d97d17aedf4533f6512d1fd453c98292c954a999"} Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.203222 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcz6p" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.208523 4735 scope.go:117] "RemoveContainer" containerID="463e251f350349fe8530a13fd0a10f7b87fb292c135b2c88bc2bcca7bc8ccb48" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.211025 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z4vz" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.211043 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z4vz" event={"ID":"707fe5c2-87d6-4636-9270-b3295925cefd","Type":"ContainerDied","Data":"1b504004b25102ec1a6d18bd6fa5bf75456bf92e9e2c073c4c15c9380721a118"} Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.214780 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" event={"ID":"761659b6-1552-4fd3-8e5a-10bf29a772b3","Type":"ContainerStarted","Data":"9ef0370f4278746a3ae514746f0bbc04d7e017528472256ced696ef3e1d86858"} Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.214829 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" event={"ID":"761659b6-1552-4fd3-8e5a-10bf29a772b3","Type":"ContainerStarted","Data":"9fbe8137988b9c228798d1284cafa27d57a828a5129f98734b9c6e1db7c04ef9"} Jan 28 09:48:59 crc 
kubenswrapper[4735]: I0128 09:48:59.214998 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.217170 4735 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mrxf2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.217228 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" podUID="761659b6-1552-4fd3-8e5a-10bf29a772b3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.242637 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ckmr"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.251783 4735 scope.go:117] "RemoveContainer" containerID="11318ff14ea551ad2f5fd52a534c3982eff2b1cf42003004205a5251451c8479" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.261207 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6ckmr"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.271495 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzl24"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.272502 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzl24"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.274492 4735 scope.go:117] "RemoveContainer" containerID="0b7cea084b689e81ee7e74cd4e5d5b790ec12189cb8e0b35e64e9c45bc78c207" Jan 28 09:48:59 crc 
kubenswrapper[4735]: I0128 09:48:59.288995 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" podStartSLOduration=2.288380118 podStartE2EDuration="2.288380118s" podCreationTimestamp="2026-01-28 09:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:48:59.279439967 +0000 UTC m=+392.429104330" watchObservedRunningTime="2026-01-28 09:48:59.288380118 +0000 UTC m=+392.438044491" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.293603 4735 scope.go:117] "RemoveContainer" containerID="7faa76d792f955a5031737e123a49e5da4db8d25d64cac27744f3610937dc6dc" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.320255 4735 scope.go:117] "RemoveContainer" containerID="0b093f13f005a1db0955fe85d528a2d29f5137fdc1483af61c55234bcc9773e1" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.322692 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tkljg"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.341839 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tkljg"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.345672 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcz6p"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.346732 4735 scope.go:117] "RemoveContainer" containerID="8a9b6b5c1f95d843ad64b4b3ec2a9b5a0c609d02b15313343324a43d262f2257" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.349208 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lcz6p"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.351579 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z4vz"] Jan 28 
09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.354468 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4z4vz"] Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.363061 4735 scope.go:117] "RemoveContainer" containerID="83562bc2c0908a7791a47689f660f443cb82833b763025069e045f0cc1570632" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.382447 4735 scope.go:117] "RemoveContainer" containerID="5e6baa5b1a5ffc05eff7a8899a247dd4dfd6d1eb07395a64f3c126078d8d34ab" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.399171 4735 scope.go:117] "RemoveContainer" containerID="cf14fd03aec6bded0a770d8a3a29aa12928af2f2375dd0f4711e4a436803c6b4" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.420471 4735 scope.go:117] "RemoveContainer" containerID="31910b6136de3f52b1eafa990902d962b14e3031697893264d2eac51fa437bce" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.444723 4735 scope.go:117] "RemoveContainer" containerID="5cd6c4367ff17e2ce87e0f91ffdddcefbeb90e0c1ddb94e4c7bfdd732eb4cdec" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.464299 4735 scope.go:117] "RemoveContainer" containerID="57572d76ee966148655b99e7d7ae294e67abfa7f6e93fbacd067878cb46f2d06" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.512949 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" path="/var/lib/kubelet/pods/297a2cc5-ec84-4fd0-9bf1-25ccc5062045/volumes" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.514306 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" path="/var/lib/kubelet/pods/707fe5c2-87d6-4636-9270-b3295925cefd/volumes" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.515206 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" path="/var/lib/kubelet/pods/b4ba59eb-a71b-467f-a035-9824e10a7ffe/volumes" Jan 28 
09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.516014 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b59344-a72d-41c4-a8af-405990047117" path="/var/lib/kubelet/pods/b7b59344-a72d-41c4-a8af-405990047117/volumes" Jan 28 09:48:59 crc kubenswrapper[4735]: I0128 09:48:59.517272 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" path="/var/lib/kubelet/pods/d9c6bba7-8f28-4afe-baa8-f070b3f0acfa/volumes" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.090827 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s94c6"] Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091022 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091034 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091043 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerName="extract-content" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091049 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerName="extract-content" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091059 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091065 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091073 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="707fe5c2-87d6-4636-9270-b3295925cefd" containerName="extract-content" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091080 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" containerName="extract-content" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091089 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091094 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091104 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerName="extract-utilities" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091109 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerName="extract-utilities" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091116 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerName="extract-utilities" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091122 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerName="extract-utilities" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091131 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b59344-a72d-41c4-a8af-405990047117" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091138 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b59344-a72d-41c4-a8af-405990047117" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091146 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091152 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091160 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerName="extract-content" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091166 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerName="extract-content" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091175 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b59344-a72d-41c4-a8af-405990047117" containerName="extract-content" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091180 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b59344-a72d-41c4-a8af-405990047117" containerName="extract-content" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091187 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" containerName="extract-utilities" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091193 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" containerName="extract-utilities" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091199 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b59344-a72d-41c4-a8af-405990047117" containerName="extract-utilities" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091205 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b59344-a72d-41c4-a8af-405990047117" containerName="extract-utilities" Jan 28 09:49:00 crc kubenswrapper[4735]: E0128 09:49:00.091213 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091219 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091308 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="707fe5c2-87d6-4636-9270-b3295925cefd" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091322 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ba59eb-a71b-467f-a035-9824e10a7ffe" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091334 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091341 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b59344-a72d-41c4-a8af-405990047117" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091350 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c6bba7-8f28-4afe-baa8-f070b3f0acfa" containerName="marketplace-operator" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.091358 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="297a2cc5-ec84-4fd0-9bf1-25ccc5062045" containerName="registry-server" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.092048 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.095570 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.103586 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s94c6"] Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.230013 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mrxf2" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.244499 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f435b834-ada9-4e43-be5a-d7d137c0dda5-utilities\") pod \"redhat-marketplace-s94c6\" (UID: \"f435b834-ada9-4e43-be5a-d7d137c0dda5\") " pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.244552 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f435b834-ada9-4e43-be5a-d7d137c0dda5-catalog-content\") pod \"redhat-marketplace-s94c6\" (UID: \"f435b834-ada9-4e43-be5a-d7d137c0dda5\") " pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.244641 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glk4b\" (UniqueName: \"kubernetes.io/projected/f435b834-ada9-4e43-be5a-d7d137c0dda5-kube-api-access-glk4b\") pod \"redhat-marketplace-s94c6\" (UID: \"f435b834-ada9-4e43-be5a-d7d137c0dda5\") " pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.289299 4735 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-hkm7n"] Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.290781 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.292981 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.305637 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkm7n"] Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.346362 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f435b834-ada9-4e43-be5a-d7d137c0dda5-utilities\") pod \"redhat-marketplace-s94c6\" (UID: \"f435b834-ada9-4e43-be5a-d7d137c0dda5\") " pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.346711 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-catalog-content\") pod \"redhat-operators-hkm7n\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.346740 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f435b834-ada9-4e43-be5a-d7d137c0dda5-catalog-content\") pod \"redhat-marketplace-s94c6\" (UID: \"f435b834-ada9-4e43-be5a-d7d137c0dda5\") " pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.346781 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-utilities\") pod \"redhat-operators-hkm7n\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.346800 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm5cx\" (UniqueName: \"kubernetes.io/projected/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-kube-api-access-zm5cx\") pod \"redhat-operators-hkm7n\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.346840 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glk4b\" (UniqueName: \"kubernetes.io/projected/f435b834-ada9-4e43-be5a-d7d137c0dda5-kube-api-access-glk4b\") pod \"redhat-marketplace-s94c6\" (UID: \"f435b834-ada9-4e43-be5a-d7d137c0dda5\") " pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.347145 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f435b834-ada9-4e43-be5a-d7d137c0dda5-utilities\") pod \"redhat-marketplace-s94c6\" (UID: \"f435b834-ada9-4e43-be5a-d7d137c0dda5\") " pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.347273 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f435b834-ada9-4e43-be5a-d7d137c0dda5-catalog-content\") pod \"redhat-marketplace-s94c6\" (UID: \"f435b834-ada9-4e43-be5a-d7d137c0dda5\") " pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.381625 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glk4b\" (UniqueName: 
\"kubernetes.io/projected/f435b834-ada9-4e43-be5a-d7d137c0dda5-kube-api-access-glk4b\") pod \"redhat-marketplace-s94c6\" (UID: \"f435b834-ada9-4e43-be5a-d7d137c0dda5\") " pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.411446 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.448406 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-catalog-content\") pod \"redhat-operators-hkm7n\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.448627 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm5cx\" (UniqueName: \"kubernetes.io/projected/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-kube-api-access-zm5cx\") pod \"redhat-operators-hkm7n\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.448740 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-utilities\") pod \"redhat-operators-hkm7n\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.449117 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-utilities\") pod \"redhat-operators-hkm7n\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc 
kubenswrapper[4735]: I0128 09:49:00.450009 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-catalog-content\") pod \"redhat-operators-hkm7n\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.468555 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm5cx\" (UniqueName: \"kubernetes.io/projected/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-kube-api-access-zm5cx\") pod \"redhat-operators-hkm7n\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.608387 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:00 crc kubenswrapper[4735]: I0128 09:49:00.883246 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s94c6"] Jan 28 09:49:00 crc kubenswrapper[4735]: W0128 09:49:00.890845 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf435b834_ada9_4e43_be5a_d7d137c0dda5.slice/crio-8c56120b6f8fe4ea33d77ab7d32b8bcd65bd0f86c2ef43d96baf835de35dba80 WatchSource:0}: Error finding container 8c56120b6f8fe4ea33d77ab7d32b8bcd65bd0f86c2ef43d96baf835de35dba80: Status 404 returned error can't find the container with id 8c56120b6f8fe4ea33d77ab7d32b8bcd65bd0f86c2ef43d96baf835de35dba80 Jan 28 09:49:01 crc kubenswrapper[4735]: I0128 09:49:01.005577 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hkm7n"] Jan 28 09:49:01 crc kubenswrapper[4735]: W0128 09:49:01.016222 4735 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dbc89ca_3e14_4e8d_ad36_684f3fd5061f.slice/crio-d3fa1fe0fe03ad2655b8af6bc355fb08090cba9d04b9ed26cf3db9bfba74822d WatchSource:0}: Error finding container d3fa1fe0fe03ad2655b8af6bc355fb08090cba9d04b9ed26cf3db9bfba74822d: Status 404 returned error can't find the container with id d3fa1fe0fe03ad2655b8af6bc355fb08090cba9d04b9ed26cf3db9bfba74822d Jan 28 09:49:01 crc kubenswrapper[4735]: I0128 09:49:01.235385 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkm7n" event={"ID":"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f","Type":"ContainerStarted","Data":"559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e"} Jan 28 09:49:01 crc kubenswrapper[4735]: I0128 09:49:01.235449 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkm7n" event={"ID":"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f","Type":"ContainerStarted","Data":"d3fa1fe0fe03ad2655b8af6bc355fb08090cba9d04b9ed26cf3db9bfba74822d"} Jan 28 09:49:01 crc kubenswrapper[4735]: I0128 09:49:01.237844 4735 generic.go:334] "Generic (PLEG): container finished" podID="f435b834-ada9-4e43-be5a-d7d137c0dda5" containerID="ae321f467712935f87569310f9a8874d3ee28bbf64a81b1c20149f4ccf7273c6" exitCode=0 Jan 28 09:49:01 crc kubenswrapper[4735]: I0128 09:49:01.237952 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s94c6" event={"ID":"f435b834-ada9-4e43-be5a-d7d137c0dda5","Type":"ContainerDied","Data":"ae321f467712935f87569310f9a8874d3ee28bbf64a81b1c20149f4ccf7273c6"} Jan 28 09:49:01 crc kubenswrapper[4735]: I0128 09:49:01.238036 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s94c6" event={"ID":"f435b834-ada9-4e43-be5a-d7d137c0dda5","Type":"ContainerStarted","Data":"8c56120b6f8fe4ea33d77ab7d32b8bcd65bd0f86c2ef43d96baf835de35dba80"} Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 
09:49:02.245110 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s94c6" event={"ID":"f435b834-ada9-4e43-be5a-d7d137c0dda5","Type":"ContainerStarted","Data":"935c3d533fbde3f611d2cb51bfc6ac5d5b8b7b8e0b54f8abc418ca0dd94c781d"} Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.247069 4735 generic.go:334] "Generic (PLEG): container finished" podID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerID="559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e" exitCode=0 Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.247102 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkm7n" event={"ID":"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f","Type":"ContainerDied","Data":"559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e"} Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.486842 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-swb69"] Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.487841 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.492529 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.498032 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swb69"] Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.578675 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hc8w\" (UniqueName: \"kubernetes.io/projected/0ed89506-d111-4bdd-b933-d2325e4b897f-kube-api-access-2hc8w\") pod \"community-operators-swb69\" (UID: \"0ed89506-d111-4bdd-b933-d2325e4b897f\") " pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.578735 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ed89506-d111-4bdd-b933-d2325e4b897f-catalog-content\") pod \"community-operators-swb69\" (UID: \"0ed89506-d111-4bdd-b933-d2325e4b897f\") " pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.578821 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ed89506-d111-4bdd-b933-d2325e4b897f-utilities\") pod \"community-operators-swb69\" (UID: \"0ed89506-d111-4bdd-b933-d2325e4b897f\") " pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.680328 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hc8w\" (UniqueName: \"kubernetes.io/projected/0ed89506-d111-4bdd-b933-d2325e4b897f-kube-api-access-2hc8w\") pod \"community-operators-swb69\" 
(UID: \"0ed89506-d111-4bdd-b933-d2325e4b897f\") " pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.680415 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ed89506-d111-4bdd-b933-d2325e4b897f-catalog-content\") pod \"community-operators-swb69\" (UID: \"0ed89506-d111-4bdd-b933-d2325e4b897f\") " pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.680442 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ed89506-d111-4bdd-b933-d2325e4b897f-utilities\") pod \"community-operators-swb69\" (UID: \"0ed89506-d111-4bdd-b933-d2325e4b897f\") " pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.681013 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ed89506-d111-4bdd-b933-d2325e4b897f-utilities\") pod \"community-operators-swb69\" (UID: \"0ed89506-d111-4bdd-b933-d2325e4b897f\") " pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.681284 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ed89506-d111-4bdd-b933-d2325e4b897f-catalog-content\") pod \"community-operators-swb69\" (UID: \"0ed89506-d111-4bdd-b933-d2325e4b897f\") " pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.683674 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x2lhf"] Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.684660 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.686651 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.700953 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2lhf"] Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.708707 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hc8w\" (UniqueName: \"kubernetes.io/projected/0ed89506-d111-4bdd-b933-d2325e4b897f-kube-api-access-2hc8w\") pod \"community-operators-swb69\" (UID: \"0ed89506-d111-4bdd-b933-d2325e4b897f\") " pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.781962 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578b28be-496b-4970-91ff-75695e48b20f-catalog-content\") pod \"certified-operators-x2lhf\" (UID: \"578b28be-496b-4970-91ff-75695e48b20f\") " pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.782048 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578b28be-496b-4970-91ff-75695e48b20f-utilities\") pod \"certified-operators-x2lhf\" (UID: \"578b28be-496b-4970-91ff-75695e48b20f\") " pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.782122 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbkc6\" (UniqueName: \"kubernetes.io/projected/578b28be-496b-4970-91ff-75695e48b20f-kube-api-access-qbkc6\") pod \"certified-operators-x2lhf\" (UID: 
\"578b28be-496b-4970-91ff-75695e48b20f\") " pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.817024 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.883520 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578b28be-496b-4970-91ff-75695e48b20f-utilities\") pod \"certified-operators-x2lhf\" (UID: \"578b28be-496b-4970-91ff-75695e48b20f\") " pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.883593 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbkc6\" (UniqueName: \"kubernetes.io/projected/578b28be-496b-4970-91ff-75695e48b20f-kube-api-access-qbkc6\") pod \"certified-operators-x2lhf\" (UID: \"578b28be-496b-4970-91ff-75695e48b20f\") " pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.883661 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578b28be-496b-4970-91ff-75695e48b20f-catalog-content\") pod \"certified-operators-x2lhf\" (UID: \"578b28be-496b-4970-91ff-75695e48b20f\") " pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.884514 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/578b28be-496b-4970-91ff-75695e48b20f-catalog-content\") pod \"certified-operators-x2lhf\" (UID: \"578b28be-496b-4970-91ff-75695e48b20f\") " pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.884825 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/578b28be-496b-4970-91ff-75695e48b20f-utilities\") pod \"certified-operators-x2lhf\" (UID: \"578b28be-496b-4970-91ff-75695e48b20f\") " pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:02 crc kubenswrapper[4735]: I0128 09:49:02.904636 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbkc6\" (UniqueName: \"kubernetes.io/projected/578b28be-496b-4970-91ff-75695e48b20f-kube-api-access-qbkc6\") pod \"certified-operators-x2lhf\" (UID: \"578b28be-496b-4970-91ff-75695e48b20f\") " pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:03 crc kubenswrapper[4735]: I0128 09:49:03.009112 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:03 crc kubenswrapper[4735]: I0128 09:49:03.204280 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swb69"] Jan 28 09:49:03 crc kubenswrapper[4735]: I0128 09:49:03.254233 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swb69" event={"ID":"0ed89506-d111-4bdd-b933-d2325e4b897f","Type":"ContainerStarted","Data":"f472ac447b06053ec92c192cc08f79f100f2f6b7d163cb5791ac5dda7c80072a"} Jan 28 09:49:03 crc kubenswrapper[4735]: I0128 09:49:03.257011 4735 generic.go:334] "Generic (PLEG): container finished" podID="f435b834-ada9-4e43-be5a-d7d137c0dda5" containerID="935c3d533fbde3f611d2cb51bfc6ac5d5b8b7b8e0b54f8abc418ca0dd94c781d" exitCode=0 Jan 28 09:49:03 crc kubenswrapper[4735]: I0128 09:49:03.257078 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s94c6" event={"ID":"f435b834-ada9-4e43-be5a-d7d137c0dda5","Type":"ContainerDied","Data":"935c3d533fbde3f611d2cb51bfc6ac5d5b8b7b8e0b54f8abc418ca0dd94c781d"} Jan 28 09:49:03 crc kubenswrapper[4735]: I0128 09:49:03.425874 4735 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x2lhf"] Jan 28 09:49:03 crc kubenswrapper[4735]: W0128 09:49:03.431432 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod578b28be_496b_4970_91ff_75695e48b20f.slice/crio-1cdb8c33911ff32309284c53ba59f16a8b445769d657764aceedfc7265425685 WatchSource:0}: Error finding container 1cdb8c33911ff32309284c53ba59f16a8b445769d657764aceedfc7265425685: Status 404 returned error can't find the container with id 1cdb8c33911ff32309284c53ba59f16a8b445769d657764aceedfc7265425685 Jan 28 09:49:04 crc kubenswrapper[4735]: I0128 09:49:04.264383 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s94c6" event={"ID":"f435b834-ada9-4e43-be5a-d7d137c0dda5","Type":"ContainerStarted","Data":"ce769c96a8c098091119b8e074f2bad1e3abd00a5cc0d8e88f5942a38f1af4c7"} Jan 28 09:49:04 crc kubenswrapper[4735]: I0128 09:49:04.266537 4735 generic.go:334] "Generic (PLEG): container finished" podID="0ed89506-d111-4bdd-b933-d2325e4b897f" containerID="054b9f69d0982ce2fd72d2dec3c87f48c9936a55ddd33d977cc056135e0272f0" exitCode=0 Jan 28 09:49:04 crc kubenswrapper[4735]: I0128 09:49:04.266603 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swb69" event={"ID":"0ed89506-d111-4bdd-b933-d2325e4b897f","Type":"ContainerDied","Data":"054b9f69d0982ce2fd72d2dec3c87f48c9936a55ddd33d977cc056135e0272f0"} Jan 28 09:49:04 crc kubenswrapper[4735]: I0128 09:49:04.274971 4735 generic.go:334] "Generic (PLEG): container finished" podID="578b28be-496b-4970-91ff-75695e48b20f" containerID="4e30825f97ad873add5279689ec9661aba34981a4b8dfe3ce87b72e9fea7cdf1" exitCode=0 Jan 28 09:49:04 crc kubenswrapper[4735]: I0128 09:49:04.275047 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2lhf" 
event={"ID":"578b28be-496b-4970-91ff-75695e48b20f","Type":"ContainerDied","Data":"4e30825f97ad873add5279689ec9661aba34981a4b8dfe3ce87b72e9fea7cdf1"} Jan 28 09:49:04 crc kubenswrapper[4735]: I0128 09:49:04.275322 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2lhf" event={"ID":"578b28be-496b-4970-91ff-75695e48b20f","Type":"ContainerStarted","Data":"1cdb8c33911ff32309284c53ba59f16a8b445769d657764aceedfc7265425685"} Jan 28 09:49:04 crc kubenswrapper[4735]: I0128 09:49:04.277690 4735 generic.go:334] "Generic (PLEG): container finished" podID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerID="6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d" exitCode=0 Jan 28 09:49:04 crc kubenswrapper[4735]: I0128 09:49:04.277723 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkm7n" event={"ID":"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f","Type":"ContainerDied","Data":"6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d"} Jan 28 09:49:04 crc kubenswrapper[4735]: I0128 09:49:04.284042 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s94c6" podStartSLOduration=1.7356208199999998 podStartE2EDuration="4.284023286s" podCreationTimestamp="2026-01-28 09:49:00 +0000 UTC" firstStartedPulling="2026-01-28 09:49:01.239727475 +0000 UTC m=+394.389391858" lastFinishedPulling="2026-01-28 09:49:03.788129951 +0000 UTC m=+396.937794324" observedRunningTime="2026-01-28 09:49:04.281186513 +0000 UTC m=+397.430850886" watchObservedRunningTime="2026-01-28 09:49:04.284023286 +0000 UTC m=+397.433687659" Jan 28 09:49:05 crc kubenswrapper[4735]: I0128 09:49:05.285828 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkm7n" 
event={"ID":"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f","Type":"ContainerStarted","Data":"5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692"} Jan 28 09:49:05 crc kubenswrapper[4735]: I0128 09:49:05.297644 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swb69" event={"ID":"0ed89506-d111-4bdd-b933-d2325e4b897f","Type":"ContainerStarted","Data":"fae28c5e2c19f12fde54083ae1ae61bf5595624bcdfebac020802415ea603d3a"} Jan 28 09:49:05 crc kubenswrapper[4735]: I0128 09:49:05.302460 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2lhf" event={"ID":"578b28be-496b-4970-91ff-75695e48b20f","Type":"ContainerStarted","Data":"0581866f4243aad46db80d3556f266d0408c8c3e05a180af7259accd9c6eb607"} Jan 28 09:49:05 crc kubenswrapper[4735]: I0128 09:49:05.315386 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hkm7n" podStartSLOduration=2.8502918360000002 podStartE2EDuration="5.315361528s" podCreationTimestamp="2026-01-28 09:49:00 +0000 UTC" firstStartedPulling="2026-01-28 09:49:02.248600167 +0000 UTC m=+395.398264540" lastFinishedPulling="2026-01-28 09:49:04.713669859 +0000 UTC m=+397.863334232" observedRunningTime="2026-01-28 09:49:05.311010645 +0000 UTC m=+398.460675018" watchObservedRunningTime="2026-01-28 09:49:05.315361528 +0000 UTC m=+398.465025911" Jan 28 09:49:05 crc kubenswrapper[4735]: I0128 09:49:05.430170 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:49:05 crc kubenswrapper[4735]: I0128 09:49:05.430290 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" 
podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:49:06 crc kubenswrapper[4735]: I0128 09:49:06.310465 4735 generic.go:334] "Generic (PLEG): container finished" podID="0ed89506-d111-4bdd-b933-d2325e4b897f" containerID="fae28c5e2c19f12fde54083ae1ae61bf5595624bcdfebac020802415ea603d3a" exitCode=0 Jan 28 09:49:06 crc kubenswrapper[4735]: I0128 09:49:06.310515 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swb69" event={"ID":"0ed89506-d111-4bdd-b933-d2325e4b897f","Type":"ContainerDied","Data":"fae28c5e2c19f12fde54083ae1ae61bf5595624bcdfebac020802415ea603d3a"} Jan 28 09:49:06 crc kubenswrapper[4735]: I0128 09:49:06.325094 4735 generic.go:334] "Generic (PLEG): container finished" podID="578b28be-496b-4970-91ff-75695e48b20f" containerID="0581866f4243aad46db80d3556f266d0408c8c3e05a180af7259accd9c6eb607" exitCode=0 Jan 28 09:49:06 crc kubenswrapper[4735]: I0128 09:49:06.326138 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2lhf" event={"ID":"578b28be-496b-4970-91ff-75695e48b20f","Type":"ContainerDied","Data":"0581866f4243aad46db80d3556f266d0408c8c3e05a180af7259accd9c6eb607"} Jan 28 09:49:07 crc kubenswrapper[4735]: I0128 09:49:07.332882 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swb69" event={"ID":"0ed89506-d111-4bdd-b933-d2325e4b897f","Type":"ContainerStarted","Data":"b122f8a7ef6fca07e909a8a1e63d4992570cdf8f15b88448225eb13444acc6e3"} Jan 28 09:49:07 crc kubenswrapper[4735]: I0128 09:49:07.334626 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x2lhf" 
event={"ID":"578b28be-496b-4970-91ff-75695e48b20f","Type":"ContainerStarted","Data":"c7d03d07ae16db10b19c67bffaadff170cd15c38018047d2e46524d6705f5934"} Jan 28 09:49:07 crc kubenswrapper[4735]: I0128 09:49:07.378337 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-swb69" podStartSLOduration=2.857485685 podStartE2EDuration="5.378306979s" podCreationTimestamp="2026-01-28 09:49:02 +0000 UTC" firstStartedPulling="2026-01-28 09:49:04.27257527 +0000 UTC m=+397.422239643" lastFinishedPulling="2026-01-28 09:49:06.793396564 +0000 UTC m=+399.943060937" observedRunningTime="2026-01-28 09:49:07.353858537 +0000 UTC m=+400.503522910" watchObservedRunningTime="2026-01-28 09:49:07.378306979 +0000 UTC m=+400.527971352" Jan 28 09:49:10 crc kubenswrapper[4735]: I0128 09:49:10.411850 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:10 crc kubenswrapper[4735]: I0128 09:49:10.412141 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:10 crc kubenswrapper[4735]: I0128 09:49:10.466656 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:10 crc kubenswrapper[4735]: I0128 09:49:10.484812 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x2lhf" podStartSLOduration=5.778659845 podStartE2EDuration="8.484791987s" podCreationTimestamp="2026-01-28 09:49:02 +0000 UTC" firstStartedPulling="2026-01-28 09:49:04.277084227 +0000 UTC m=+397.426748600" lastFinishedPulling="2026-01-28 09:49:06.983216369 +0000 UTC m=+400.132880742" observedRunningTime="2026-01-28 09:49:07.377108998 +0000 UTC m=+400.526773391" watchObservedRunningTime="2026-01-28 09:49:10.484791987 +0000 UTC m=+403.634456360" Jan 28 09:49:10 crc 
kubenswrapper[4735]: I0128 09:49:10.608542 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:10 crc kubenswrapper[4735]: I0128 09:49:10.609063 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:10 crc kubenswrapper[4735]: I0128 09:49:10.665346 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:11 crc kubenswrapper[4735]: I0128 09:49:11.405908 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:49:11 crc kubenswrapper[4735]: I0128 09:49:11.425457 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s94c6" Jan 28 09:49:12 crc kubenswrapper[4735]: I0128 09:49:12.817424 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:12 crc kubenswrapper[4735]: I0128 09:49:12.817480 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:12 crc kubenswrapper[4735]: I0128 09:49:12.860148 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:13 crc kubenswrapper[4735]: I0128 09:49:13.009442 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:13 crc kubenswrapper[4735]: I0128 09:49:13.009550 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:13 crc kubenswrapper[4735]: I0128 09:49:13.057446 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:13 crc kubenswrapper[4735]: I0128 09:49:13.415862 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-swb69" Jan 28 09:49:13 crc kubenswrapper[4735]: I0128 09:49:13.419743 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x2lhf" Jan 28 09:49:17 crc kubenswrapper[4735]: I0128 09:49:17.651946 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" podUID="a005ba34-6174-4e23-9c80-aea87194684a" containerName="registry" containerID="cri-o://7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563" gracePeriod=30 Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.277361 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.405580 4735 generic.go:334] "Generic (PLEG): container finished" podID="a005ba34-6174-4e23-9c80-aea87194684a" containerID="7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563" exitCode=0 Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.405624 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" event={"ID":"a005ba34-6174-4e23-9c80-aea87194684a","Type":"ContainerDied","Data":"7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563"} Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.405653 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" event={"ID":"a005ba34-6174-4e23-9c80-aea87194684a","Type":"ContainerDied","Data":"05fc8be13c40e6958e75cacb8049613a60e6af457fcee50a70f76a7f9ab637aa"} Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 
09:49:19.405670 4735 scope.go:117] "RemoveContainer" containerID="7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.405786 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.408987 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-bound-sa-token\") pod \"a005ba34-6174-4e23-9c80-aea87194684a\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.409055 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-registry-certificates\") pod \"a005ba34-6174-4e23-9c80-aea87194684a\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.409088 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a005ba34-6174-4e23-9c80-aea87194684a-ca-trust-extracted\") pod \"a005ba34-6174-4e23-9c80-aea87194684a\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.409123 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g77fq\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-kube-api-access-g77fq\") pod \"a005ba34-6174-4e23-9c80-aea87194684a\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.409145 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-registry-tls\") pod \"a005ba34-6174-4e23-9c80-aea87194684a\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.409244 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a005ba34-6174-4e23-9c80-aea87194684a\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.409286 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-trusted-ca\") pod \"a005ba34-6174-4e23-9c80-aea87194684a\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.409317 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a005ba34-6174-4e23-9c80-aea87194684a-installation-pull-secrets\") pod \"a005ba34-6174-4e23-9c80-aea87194684a\" (UID: \"a005ba34-6174-4e23-9c80-aea87194684a\") " Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.410837 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a005ba34-6174-4e23-9c80-aea87194684a" (UID: "a005ba34-6174-4e23-9c80-aea87194684a"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.412359 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a005ba34-6174-4e23-9c80-aea87194684a" (UID: "a005ba34-6174-4e23-9c80-aea87194684a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.424041 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a005ba34-6174-4e23-9c80-aea87194684a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a005ba34-6174-4e23-9c80-aea87194684a" (UID: "a005ba34-6174-4e23-9c80-aea87194684a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.424167 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-kube-api-access-g77fq" (OuterVolumeSpecName: "kube-api-access-g77fq") pod "a005ba34-6174-4e23-9c80-aea87194684a" (UID: "a005ba34-6174-4e23-9c80-aea87194684a"). InnerVolumeSpecName "kube-api-access-g77fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.424480 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a005ba34-6174-4e23-9c80-aea87194684a" (UID: "a005ba34-6174-4e23-9c80-aea87194684a"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.424778 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a005ba34-6174-4e23-9c80-aea87194684a" (UID: "a005ba34-6174-4e23-9c80-aea87194684a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.426024 4735 scope.go:117] "RemoveContainer" containerID="7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563" Jan 28 09:49:19 crc kubenswrapper[4735]: E0128 09:49:19.426612 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563\": container with ID starting with 7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563 not found: ID does not exist" containerID="7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.426667 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563"} err="failed to get container status \"7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563\": rpc error: code = NotFound desc = could not find container \"7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563\": container with ID starting with 7f1db70e4ed124e567ba9354df23223059072afb208d94f6aa00a149b99fd563 not found: ID does not exist" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.427460 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: 
"registry-storage") pod "a005ba34-6174-4e23-9c80-aea87194684a" (UID: "a005ba34-6174-4e23-9c80-aea87194684a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.437511 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a005ba34-6174-4e23-9c80-aea87194684a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a005ba34-6174-4e23-9c80-aea87194684a" (UID: "a005ba34-6174-4e23-9c80-aea87194684a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.511204 4735 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.511244 4735 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.511259 4735 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a005ba34-6174-4e23-9c80-aea87194684a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.511272 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g77fq\" (UniqueName: \"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-kube-api-access-g77fq\") on node \"crc\" DevicePath \"\"" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.511285 4735 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/a005ba34-6174-4e23-9c80-aea87194684a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.511294 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a005ba34-6174-4e23-9c80-aea87194684a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.511308 4735 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a005ba34-6174-4e23-9c80-aea87194684a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.724444 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85zqb"] Jan 28 09:49:19 crc kubenswrapper[4735]: I0128 09:49:19.729077 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85zqb"] Jan 28 09:49:21 crc kubenswrapper[4735]: I0128 09:49:21.511469 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a005ba34-6174-4e23-9c80-aea87194684a" path="/var/lib/kubelet/pods/a005ba34-6174-4e23-9c80-aea87194684a/volumes" Jan 28 09:49:24 crc kubenswrapper[4735]: I0128 09:49:24.120676 4735 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-85zqb container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.16:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 09:49:24 crc kubenswrapper[4735]: I0128 09:49:24.121005 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-85zqb" podUID="a005ba34-6174-4e23-9c80-aea87194684a" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.16:5000/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 09:49:35 crc kubenswrapper[4735]: I0128 09:49:35.430157 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:49:35 crc kubenswrapper[4735]: I0128 09:49:35.431423 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:49:35 crc kubenswrapper[4735]: I0128 09:49:35.431498 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:49:35 crc kubenswrapper[4735]: I0128 09:49:35.432479 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08c803f74dc0e01fd80ba376f98d82a56fa37d9898555cb46e404071923fd7de"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 09:49:35 crc kubenswrapper[4735]: I0128 09:49:35.432572 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://08c803f74dc0e01fd80ba376f98d82a56fa37d9898555cb46e404071923fd7de" gracePeriod=600 Jan 28 09:49:36 crc kubenswrapper[4735]: I0128 09:49:36.536155 4735 generic.go:334] "Generic (PLEG): container 
finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="08c803f74dc0e01fd80ba376f98d82a56fa37d9898555cb46e404071923fd7de" exitCode=0 Jan 28 09:49:36 crc kubenswrapper[4735]: I0128 09:49:36.536458 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"08c803f74dc0e01fd80ba376f98d82a56fa37d9898555cb46e404071923fd7de"} Jan 28 09:49:36 crc kubenswrapper[4735]: I0128 09:49:36.536488 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"cdd11a1295f7313637000d20b5c416225b55aabcfd028387bb27b194be3b0438"} Jan 28 09:49:36 crc kubenswrapper[4735]: I0128 09:49:36.536503 4735 scope.go:117] "RemoveContainer" containerID="d25a2ed6071816880019b0397b75aa44556acc02042cb5160951ed3172890259" Jan 28 09:51:27 crc kubenswrapper[4735]: I0128 09:51:27.770150 4735 scope.go:117] "RemoveContainer" containerID="393a8412db035ac085bd7cc5440518a3da52729c0582895be7a332f6b4147270" Jan 28 09:51:35 crc kubenswrapper[4735]: I0128 09:51:35.429586 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:51:35 crc kubenswrapper[4735]: I0128 09:51:35.430221 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:52:05 crc kubenswrapper[4735]: I0128 09:52:05.430575 4735 
patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:52:05 crc kubenswrapper[4735]: I0128 09:52:05.431340 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:52:35 crc kubenswrapper[4735]: I0128 09:52:35.429975 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:52:35 crc kubenswrapper[4735]: I0128 09:52:35.430866 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:52:35 crc kubenswrapper[4735]: I0128 09:52:35.430944 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:52:35 crc kubenswrapper[4735]: I0128 09:52:35.431817 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cdd11a1295f7313637000d20b5c416225b55aabcfd028387bb27b194be3b0438"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 09:52:35 crc kubenswrapper[4735]: I0128 09:52:35.431918 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://cdd11a1295f7313637000d20b5c416225b55aabcfd028387bb27b194be3b0438" gracePeriod=600 Jan 28 09:52:35 crc kubenswrapper[4735]: I0128 09:52:35.691785 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="cdd11a1295f7313637000d20b5c416225b55aabcfd028387bb27b194be3b0438" exitCode=0 Jan 28 09:52:35 crc kubenswrapper[4735]: I0128 09:52:35.691893 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"cdd11a1295f7313637000d20b5c416225b55aabcfd028387bb27b194be3b0438"} Jan 28 09:52:35 crc kubenswrapper[4735]: I0128 09:52:35.692205 4735 scope.go:117] "RemoveContainer" containerID="08c803f74dc0e01fd80ba376f98d82a56fa37d9898555cb46e404071923fd7de" Jan 28 09:52:36 crc kubenswrapper[4735]: I0128 09:52:36.701025 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"ffb90dbb3d67df0d3c11867f5146dc5bfc3eacdd73311f14c6982a1e91ecde2d"} Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.924455 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5"] Jan 28 09:54:20 crc kubenswrapper[4735]: E0128 09:54:20.925268 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a005ba34-6174-4e23-9c80-aea87194684a" containerName="registry" Jan 28 09:54:20 crc 
kubenswrapper[4735]: I0128 09:54:20.925283 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a005ba34-6174-4e23-9c80-aea87194684a" containerName="registry" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.925396 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a005ba34-6174-4e23-9c80-aea87194684a" containerName="registry" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.925823 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.928282 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.928528 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.934594 4735 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ch52d" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.938782 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-hw27n"] Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.939386 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hw27n" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.942250 4735 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-gpxgf" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.946329 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5"] Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.955252 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-zxd5m"] Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.956261 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.958636 4735 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5drvr" Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.968832 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hw27n"] Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.974277 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-zxd5m"] Jan 28 09:54:20 crc kubenswrapper[4735]: I0128 09:54:20.992575 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtd9h\" (UniqueName: \"kubernetes.io/projected/6f21432c-a03e-4600-8c86-65159cf299ba-kube-api-access-rtd9h\") pod \"cert-manager-cainjector-cf98fcc89-gbqg5\" (UID: \"6f21432c-a03e-4600-8c86-65159cf299ba\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.094110 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f2sc\" (UniqueName: 
\"kubernetes.io/projected/cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177-kube-api-access-8f2sc\") pod \"cert-manager-858654f9db-hw27n\" (UID: \"cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177\") " pod="cert-manager/cert-manager-858654f9db-hw27n" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.094181 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtd9h\" (UniqueName: \"kubernetes.io/projected/6f21432c-a03e-4600-8c86-65159cf299ba-kube-api-access-rtd9h\") pod \"cert-manager-cainjector-cf98fcc89-gbqg5\" (UID: \"6f21432c-a03e-4600-8c86-65159cf299ba\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.094389 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gdt\" (UniqueName: \"kubernetes.io/projected/6d0afe1e-f921-43c9-8f44-3498f0f45b85-kube-api-access-42gdt\") pod \"cert-manager-webhook-687f57d79b-zxd5m\" (UID: \"6d0afe1e-f921-43c9-8f44-3498f0f45b85\") " pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.113034 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtd9h\" (UniqueName: \"kubernetes.io/projected/6f21432c-a03e-4600-8c86-65159cf299ba-kube-api-access-rtd9h\") pod \"cert-manager-cainjector-cf98fcc89-gbqg5\" (UID: \"6f21432c-a03e-4600-8c86-65159cf299ba\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.196232 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f2sc\" (UniqueName: \"kubernetes.io/projected/cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177-kube-api-access-8f2sc\") pod \"cert-manager-858654f9db-hw27n\" (UID: \"cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177\") " pod="cert-manager/cert-manager-858654f9db-hw27n" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.196312 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42gdt\" (UniqueName: \"kubernetes.io/projected/6d0afe1e-f921-43c9-8f44-3498f0f45b85-kube-api-access-42gdt\") pod \"cert-manager-webhook-687f57d79b-zxd5m\" (UID: \"6d0afe1e-f921-43c9-8f44-3498f0f45b85\") " pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.219153 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42gdt\" (UniqueName: \"kubernetes.io/projected/6d0afe1e-f921-43c9-8f44-3498f0f45b85-kube-api-access-42gdt\") pod \"cert-manager-webhook-687f57d79b-zxd5m\" (UID: \"6d0afe1e-f921-43c9-8f44-3498f0f45b85\") " pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.223245 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f2sc\" (UniqueName: \"kubernetes.io/projected/cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177-kube-api-access-8f2sc\") pod \"cert-manager-858654f9db-hw27n\" (UID: \"cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177\") " pod="cert-manager/cert-manager-858654f9db-hw27n" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.250910 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.261371 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hw27n" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.272325 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.451435 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hw27n"] Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.461614 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.495150 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5"] Jan 28 09:54:21 crc kubenswrapper[4735]: W0128 09:54:21.504881 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f21432c_a03e_4600_8c86_65159cf299ba.slice/crio-5e9a976f29600b2fcbf3f5a49ede08912f679740b68b7997e08fca41351d843d WatchSource:0}: Error finding container 5e9a976f29600b2fcbf3f5a49ede08912f679740b68b7997e08fca41351d843d: Status 404 returned error can't find the container with id 5e9a976f29600b2fcbf3f5a49ede08912f679740b68b7997e08fca41351d843d Jan 28 09:54:21 crc kubenswrapper[4735]: I0128 09:54:21.523248 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-zxd5m"] Jan 28 09:54:21 crc kubenswrapper[4735]: W0128 09:54:21.525537 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d0afe1e_f921_43c9_8f44_3498f0f45b85.slice/crio-0ffacbc4c48e0c765135d6286bc68bf47a27e304d80efe71e85532556df9e399 WatchSource:0}: Error finding container 0ffacbc4c48e0c765135d6286bc68bf47a27e304d80efe71e85532556df9e399: Status 404 returned error can't find the container with id 0ffacbc4c48e0c765135d6286bc68bf47a27e304d80efe71e85532556df9e399 Jan 28 09:54:22 crc kubenswrapper[4735]: I0128 09:54:22.367809 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-858654f9db-hw27n" event={"ID":"cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177","Type":"ContainerStarted","Data":"d5386ebb470629d01bdd2841a9ef15dd900ac668ba85e12cdd6946cb8699febb"} Jan 28 09:54:22 crc kubenswrapper[4735]: I0128 09:54:22.370597 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5" event={"ID":"6f21432c-a03e-4600-8c86-65159cf299ba","Type":"ContainerStarted","Data":"5e9a976f29600b2fcbf3f5a49ede08912f679740b68b7997e08fca41351d843d"} Jan 28 09:54:22 crc kubenswrapper[4735]: I0128 09:54:22.372239 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" event={"ID":"6d0afe1e-f921-43c9-8f44-3498f0f45b85","Type":"ContainerStarted","Data":"0ffacbc4c48e0c765135d6286bc68bf47a27e304d80efe71e85532556df9e399"} Jan 28 09:54:27 crc kubenswrapper[4735]: I0128 09:54:27.403466 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-hw27n" event={"ID":"cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177","Type":"ContainerStarted","Data":"9791a55a3a044f400ce807e8ef30419af54b43c8d5b92ec6d469004d9878ee47"} Jan 28 09:54:27 crc kubenswrapper[4735]: I0128 09:54:27.406700 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5" event={"ID":"6f21432c-a03e-4600-8c86-65159cf299ba","Type":"ContainerStarted","Data":"5382c49783e1a93b4bb23a3c1bf8a5f59e322cb5c1a1a6e8688ee7074d17ceed"} Jan 28 09:54:27 crc kubenswrapper[4735]: I0128 09:54:27.410095 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" event={"ID":"6d0afe1e-f921-43c9-8f44-3498f0f45b85","Type":"ContainerStarted","Data":"bd83bbd81ff77c6e22cb59045f26c9029b4fde0a2d36257bbce2f0625aa7926c"} Jan 28 09:54:27 crc kubenswrapper[4735]: I0128 09:54:27.410795 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" Jan 28 09:54:27 crc kubenswrapper[4735]: I0128 09:54:27.454420 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-hw27n" podStartSLOduration=2.724404747 podStartE2EDuration="7.454404251s" podCreationTimestamp="2026-01-28 09:54:20 +0000 UTC" firstStartedPulling="2026-01-28 09:54:21.461363705 +0000 UTC m=+714.611028078" lastFinishedPulling="2026-01-28 09:54:26.191363209 +0000 UTC m=+719.341027582" observedRunningTime="2026-01-28 09:54:27.43600228 +0000 UTC m=+720.585666663" watchObservedRunningTime="2026-01-28 09:54:27.454404251 +0000 UTC m=+720.604068624" Jan 28 09:54:27 crc kubenswrapper[4735]: I0128 09:54:27.480609 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-gbqg5" podStartSLOduration=2.750655795 podStartE2EDuration="7.480593788s" podCreationTimestamp="2026-01-28 09:54:20 +0000 UTC" firstStartedPulling="2026-01-28 09:54:21.508016665 +0000 UTC m=+714.657681038" lastFinishedPulling="2026-01-28 09:54:26.237954658 +0000 UTC m=+719.387619031" observedRunningTime="2026-01-28 09:54:27.456858433 +0000 UTC m=+720.606522806" watchObservedRunningTime="2026-01-28 09:54:27.480593788 +0000 UTC m=+720.630258161" Jan 28 09:54:27 crc kubenswrapper[4735]: I0128 09:54:27.481526 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" podStartSLOduration=2.818976739 podStartE2EDuration="7.481515331s" podCreationTimestamp="2026-01-28 09:54:20 +0000 UTC" firstStartedPulling="2026-01-28 09:54:21.527137575 +0000 UTC m=+714.676801948" lastFinishedPulling="2026-01-28 09:54:26.189676127 +0000 UTC m=+719.339340540" observedRunningTime="2026-01-28 09:54:27.4754875 +0000 UTC m=+720.625151903" watchObservedRunningTime="2026-01-28 09:54:27.481515331 +0000 UTC m=+720.631179724" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 
09:54:30.379427 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6fwtd"] Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.380039 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovn-controller" containerID="cri-o://2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a" gracePeriod=30 Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.380397 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="sbdb" containerID="cri-o://c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa" gracePeriod=30 Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.380433 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="nbdb" containerID="cri-o://b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349" gracePeriod=30 Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.380472 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="northd" containerID="cri-o://ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe" gracePeriod=30 Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.380502 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9" gracePeriod=30 Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 
09:54:30.380529 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kube-rbac-proxy-node" containerID="cri-o://0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30" gracePeriod=30 Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.380556 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovn-acl-logging" containerID="cri-o://2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe" gracePeriod=30 Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.460611 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" containerID="cri-o://1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262" gracePeriod=30 Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.729686 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/3.log" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.732286 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovn-acl-logging/0.log" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.732873 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovn-controller/0.log" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.733461 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785109 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8ztfk"] Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785477 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kube-rbac-proxy-node" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785505 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kube-rbac-proxy-node" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785518 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovn-acl-logging" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785525 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovn-acl-logging" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785543 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="nbdb" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785551 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="nbdb" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785563 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kubecfg-setup" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785570 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kubecfg-setup" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785579 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" 
containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785586 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785593 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovn-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785600 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovn-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785611 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="northd" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785618 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="northd" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785629 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785636 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785645 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785652 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785660 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" 
containerName="sbdb" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785667 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="sbdb" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785675 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785684 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.785698 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785705 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785873 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="nbdb" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785889 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785897 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785907 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785917 4735 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="sbdb" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785926 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovn-acl-logging" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785936 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="kube-rbac-proxy-node" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785943 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="northd" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785953 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovn-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.785960 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: E0128 09:54:30.786059 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.786066 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.786169 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.786180 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerName="ovnkube-controller" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.788564 4735 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.819841 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-systemd\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.819900 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-config\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.819940 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.819960 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-openvswitch\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.819995 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-slash\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820024 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lnrct\" (UniqueName: \"kubernetes.io/projected/9a0aebf3-b579-4254-9f8c-7163984836f9-kube-api-access-lnrct\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820050 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-etc-openvswitch\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820084 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-log-socket\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820113 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-script-lib\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820150 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-netd\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820171 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-bin\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: 
\"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820203 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-netns\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820225 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-env-overrides\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820257 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-node-log\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820283 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-ovn\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820301 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-kubelet\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820325 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-ovn-kubernetes\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820354 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-var-lib-openvswitch\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820341 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820365 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820380 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a0aebf3-b579-4254-9f8c-7163984836f9-ovn-node-metrics-cert\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820420 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820429 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820466 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-systemd-units\") pod \"9a0aebf3-b579-4254-9f8c-7163984836f9\" (UID: \"9a0aebf3-b579-4254-9f8c-7163984836f9\") " Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820492 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). 
InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820522 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820550 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820577 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820621 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-node-log" (OuterVolumeSpecName: "node-log") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820655 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820689 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820912 4735 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820934 4735 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820948 4735 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820965 4735 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820977 4735 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821021 4735 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820949 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.820364 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821033 4735 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821084 4735 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821099 4735 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821111 4735 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821122 4735 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821258 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821292 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-log-socket" (OuterVolumeSpecName: "log-socket") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821507 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.821541 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-slash" (OuterVolumeSpecName: "host-slash") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.825859 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a0aebf3-b579-4254-9f8c-7163984836f9-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.825857 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0aebf3-b579-4254-9f8c-7163984836f9-kube-api-access-lnrct" (OuterVolumeSpecName: "kube-api-access-lnrct") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "kube-api-access-lnrct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.832991 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9a0aebf3-b579-4254-9f8c-7163984836f9" (UID: "9a0aebf3-b579-4254-9f8c-7163984836f9"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.921821 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-var-lib-openvswitch\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.921867 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-log-socket\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.921887 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-run-systemd\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.921900 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-cni-bin\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.921915 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c7e69b5-afe5-4020-a356-eea129781094-ovnkube-config\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.921933 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-cni-netd\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.921950 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9j4q\" (UniqueName: \"kubernetes.io/projected/8c7e69b5-afe5-4020-a356-eea129781094-kube-api-access-d9j4q\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922156 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-run-ovn\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922228 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c7e69b5-afe5-4020-a356-eea129781094-ovn-node-metrics-cert\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922301 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-node-log\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922341 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-kubelet\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922360 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922382 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c7e69b5-afe5-4020-a356-eea129781094-ovnkube-script-lib\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922399 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-run-ovn-kubernetes\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922455 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-run-netns\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922477 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-systemd-units\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922499 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-slash\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922512 
4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-etc-openvswitch\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922542 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-run-openvswitch\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922563 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c7e69b5-afe5-4020-a356-eea129781094-env-overrides\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922596 4735 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922608 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922625 4735 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 
09:54:30.922634 4735 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922650 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnrct\" (UniqueName: \"kubernetes.io/projected/9a0aebf3-b579-4254-9f8c-7163984836f9-kube-api-access-lnrct\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922659 4735 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a0aebf3-b579-4254-9f8c-7163984836f9-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922666 4735 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922675 4735 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a0aebf3-b579-4254-9f8c-7163984836f9-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:30 crc kubenswrapper[4735]: I0128 09:54:30.922683 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a0aebf3-b579-4254-9f8c-7163984836f9-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024327 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c7e69b5-afe5-4020-a356-eea129781094-env-overrides\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: 
I0128 09:54:31.024430 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-var-lib-openvswitch\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024474 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-log-socket\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024509 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-run-systemd\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024550 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-cni-bin\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024581 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c7e69b5-afe5-4020-a356-eea129781094-ovnkube-config\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024598 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-run-systemd\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024552 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-log-socket\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024511 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-var-lib-openvswitch\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024667 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-cni-netd\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024632 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-cni-netd\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024785 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9j4q\" (UniqueName: 
\"kubernetes.io/projected/8c7e69b5-afe5-4020-a356-eea129781094-kube-api-access-d9j4q\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024790 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-cni-bin\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024826 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-run-ovn\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024858 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c7e69b5-afe5-4020-a356-eea129781094-ovn-node-metrics-cert\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024885 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-run-ovn\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024900 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-node-log\") pod \"ovnkube-node-8ztfk\" (UID: 
\"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024935 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-kubelet\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.024971 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025006 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c7e69b5-afe5-4020-a356-eea129781094-ovnkube-script-lib\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025035 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-run-ovn-kubernetes\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025066 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-run-ovn-kubernetes\") pod \"ovnkube-node-8ztfk\" 
(UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025037 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025008 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-kubelet\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025070 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-run-netns\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025104 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-run-netns\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025024 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-node-log\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025131 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-systemd-units\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025157 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-systemd-units\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025170 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-slash\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025201 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-etc-openvswitch\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025242 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-run-openvswitch\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: 
I0128 09:54:31.025256 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-host-slash\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025303 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c7e69b5-afe5-4020-a356-eea129781094-env-overrides\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025326 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-etc-openvswitch\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025368 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e69b5-afe5-4020-a356-eea129781094-run-openvswitch\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025877 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c7e69b5-afe5-4020-a356-eea129781094-ovnkube-script-lib\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.025937 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/8c7e69b5-afe5-4020-a356-eea129781094-ovnkube-config\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.028255 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c7e69b5-afe5-4020-a356-eea129781094-ovn-node-metrics-cert\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.041502 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9j4q\" (UniqueName: \"kubernetes.io/projected/8c7e69b5-afe5-4020-a356-eea129781094-kube-api-access-d9j4q\") pod \"ovnkube-node-8ztfk\" (UID: \"8c7e69b5-afe5-4020-a356-eea129781094\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.107595 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:31 crc kubenswrapper[4735]: W0128 09:54:31.141584 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7e69b5_afe5_4020_a356_eea129781094.slice/crio-12a07a697d4e480e717f76e365c8358c535d194c6d2117be1a08a68bb6ea0418 WatchSource:0}: Error finding container 12a07a697d4e480e717f76e365c8358c535d194c6d2117be1a08a68bb6ea0418: Status 404 returned error can't find the container with id 12a07a697d4e480e717f76e365c8358c535d194c6d2117be1a08a68bb6ea0418 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.275772 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-zxd5m" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.458547 4735 generic.go:334] "Generic (PLEG): container finished" podID="8c7e69b5-afe5-4020-a356-eea129781094" containerID="1095e1ba57f29cce0f7669952cfebb8ec64a2725037736de7c137dbd0a4cef10" exitCode=0 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.458646 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerDied","Data":"1095e1ba57f29cce0f7669952cfebb8ec64a2725037736de7c137dbd0a4cef10"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.458739 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerStarted","Data":"12a07a697d4e480e717f76e365c8358c535d194c6d2117be1a08a68bb6ea0418"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.464783 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovnkube-controller/3.log" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.468107 
4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovn-acl-logging/0.log" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.468639 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6fwtd_9a0aebf3-b579-4254-9f8c-7163984836f9/ovn-controller/0.log" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469104 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262" exitCode=0 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469132 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa" exitCode=0 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469142 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349" exitCode=0 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469152 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe" exitCode=0 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469161 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9" exitCode=0 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469171 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30" exitCode=0 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469180 
4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe" exitCode=143 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469189 4735 generic.go:334] "Generic (PLEG): container finished" podID="9a0aebf3-b579-4254-9f8c-7163984836f9" containerID="2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a" exitCode=143 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469239 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469811 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469850 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469872 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469885 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469898 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469911 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469924 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469935 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469944 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469952 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469961 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469969 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469977 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469984 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.469991 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470003 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470013 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470021 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470029 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470037 4735 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470044 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470052 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470060 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470067 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470075 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470082 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470092 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} Jan 28 
09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470102 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470115 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470123 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470130 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470137 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470144 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470151 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470159 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} Jan 28 
09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470166 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470173 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470183 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6fwtd" event={"ID":"9a0aebf3-b579-4254-9f8c-7163984836f9","Type":"ContainerDied","Data":"f29c1a4afbdb817451750963a72f2b4574c81de917f95e2370193882fa3372f8"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470193 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470201 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470209 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470216 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470223 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470230 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470237 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470245 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470252 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470260 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.470277 4735 scope.go:117] "RemoveContainer" containerID="1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.473038 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/2.log" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.474047 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/1.log" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 
09:54:31.474085 4735 generic.go:334] "Generic (PLEG): container finished" podID="c50beb21-41ab-474f-90d2-ba27152379cf" containerID="3a9483e690e02a0014915fbf7e27f090be84b506983548028df8cf697a5112a6" exitCode=2 Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.474107 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jxbwx" event={"ID":"c50beb21-41ab-474f-90d2-ba27152379cf","Type":"ContainerDied","Data":"3a9483e690e02a0014915fbf7e27f090be84b506983548028df8cf697a5112a6"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.474123 4735 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d"} Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.474504 4735 scope.go:117] "RemoveContainer" containerID="3a9483e690e02a0014915fbf7e27f090be84b506983548028df8cf697a5112a6" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.474702 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jxbwx_openshift-multus(c50beb21-41ab-474f-90d2-ba27152379cf)\"" pod="openshift-multus/multus-jxbwx" podUID="c50beb21-41ab-474f-90d2-ba27152379cf" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.493482 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.522448 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6fwtd"] Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.530977 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6fwtd"] Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.556129 4735 scope.go:117] "RemoveContainer" 
containerID="c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.588388 4735 scope.go:117] "RemoveContainer" containerID="b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.613156 4735 scope.go:117] "RemoveContainer" containerID="ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.626707 4735 scope.go:117] "RemoveContainer" containerID="5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.640716 4735 scope.go:117] "RemoveContainer" containerID="0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.665465 4735 scope.go:117] "RemoveContainer" containerID="2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.683981 4735 scope.go:117] "RemoveContainer" containerID="2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.701696 4735 scope.go:117] "RemoveContainer" containerID="55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.726714 4735 scope.go:117] "RemoveContainer" containerID="1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.727317 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": container with ID starting with 1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262 not found: ID does not exist" containerID="1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262" Jan 28 09:54:31 crc 
kubenswrapper[4735]: I0128 09:54:31.727383 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} err="failed to get container status \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": rpc error: code = NotFound desc = could not find container \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": container with ID starting with 1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.727426 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.728043 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\": container with ID starting with e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061 not found: ID does not exist" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.728079 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} err="failed to get container status \"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\": rpc error: code = NotFound desc = could not find container \"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\": container with ID starting with e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.728106 4735 scope.go:117] "RemoveContainer" containerID="c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa" Jan 28 
09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.728372 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\": container with ID starting with c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa not found: ID does not exist" containerID="c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.728410 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} err="failed to get container status \"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\": rpc error: code = NotFound desc = could not find container \"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\": container with ID starting with c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.728428 4735 scope.go:117] "RemoveContainer" containerID="b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.728792 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\": container with ID starting with b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349 not found: ID does not exist" containerID="b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.728826 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} err="failed to get container status 
\"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\": rpc error: code = NotFound desc = could not find container \"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\": container with ID starting with b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.728849 4735 scope.go:117] "RemoveContainer" containerID="ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.729106 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\": container with ID starting with ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe not found: ID does not exist" containerID="ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.729133 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} err="failed to get container status \"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\": rpc error: code = NotFound desc = could not find container \"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\": container with ID starting with ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.729151 4735 scope.go:117] "RemoveContainer" containerID="5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.729475 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\": container with ID starting with 5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9 not found: ID does not exist" containerID="5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.729497 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} err="failed to get container status \"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\": rpc error: code = NotFound desc = could not find container \"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\": container with ID starting with 5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.729510 4735 scope.go:117] "RemoveContainer" containerID="0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.729772 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\": container with ID starting with 0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30 not found: ID does not exist" containerID="0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.729793 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} err="failed to get container status \"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\": rpc error: code = NotFound desc = could not find container \"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\": container with ID 
starting with 0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.729806 4735 scope.go:117] "RemoveContainer" containerID="2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.730072 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\": container with ID starting with 2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe not found: ID does not exist" containerID="2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.730093 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} err="failed to get container status \"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\": rpc error: code = NotFound desc = could not find container \"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\": container with ID starting with 2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.730110 4735 scope.go:117] "RemoveContainer" containerID="2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.730496 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\": container with ID starting with 2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a not found: ID does not exist" containerID="2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a" Jan 28 
09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.730563 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} err="failed to get container status \"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\": rpc error: code = NotFound desc = could not find container \"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\": container with ID starting with 2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.730606 4735 scope.go:117] "RemoveContainer" containerID="55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb" Jan 28 09:54:31 crc kubenswrapper[4735]: E0128 09:54:31.731029 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\": container with ID starting with 55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb not found: ID does not exist" containerID="55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.731056 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb"} err="failed to get container status \"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\": rpc error: code = NotFound desc = could not find container \"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\": container with ID starting with 55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.731072 4735 scope.go:117] "RemoveContainer" 
containerID="1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.731327 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} err="failed to get container status \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": rpc error: code = NotFound desc = could not find container \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": container with ID starting with 1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.731365 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.731740 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} err="failed to get container status \"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\": rpc error: code = NotFound desc = could not find container \"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\": container with ID starting with e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.731785 4735 scope.go:117] "RemoveContainer" containerID="c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.732128 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} err="failed to get container status \"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\": rpc error: code = NotFound desc = could 
not find container \"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\": container with ID starting with c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.732405 4735 scope.go:117] "RemoveContainer" containerID="b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.732872 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} err="failed to get container status \"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\": rpc error: code = NotFound desc = could not find container \"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\": container with ID starting with b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.732902 4735 scope.go:117] "RemoveContainer" containerID="ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.733383 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} err="failed to get container status \"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\": rpc error: code = NotFound desc = could not find container \"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\": container with ID starting with ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.733422 4735 scope.go:117] "RemoveContainer" containerID="5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 
09:54:31.733866 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} err="failed to get container status \"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\": rpc error: code = NotFound desc = could not find container \"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\": container with ID starting with 5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.733954 4735 scope.go:117] "RemoveContainer" containerID="0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.734304 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} err="failed to get container status \"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\": rpc error: code = NotFound desc = could not find container \"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\": container with ID starting with 0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.734342 4735 scope.go:117] "RemoveContainer" containerID="2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.734620 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} err="failed to get container status \"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\": rpc error: code = NotFound desc = could not find container \"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\": container with ID starting with 
2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.734668 4735 scope.go:117] "RemoveContainer" containerID="2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.734997 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} err="failed to get container status \"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\": rpc error: code = NotFound desc = could not find container \"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\": container with ID starting with 2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.735033 4735 scope.go:117] "RemoveContainer" containerID="55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.735299 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb"} err="failed to get container status \"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\": rpc error: code = NotFound desc = could not find container \"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\": container with ID starting with 55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.735347 4735 scope.go:117] "RemoveContainer" containerID="1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.735616 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} err="failed to get container status \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": rpc error: code = NotFound desc = could not find container \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": container with ID starting with 1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.735653 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.735942 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} err="failed to get container status \"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\": rpc error: code = NotFound desc = could not find container \"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\": container with ID starting with e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.735970 4735 scope.go:117] "RemoveContainer" containerID="c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.737036 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} err="failed to get container status \"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\": rpc error: code = NotFound desc = could not find container \"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\": container with ID starting with c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa not found: ID does not 
exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.737077 4735 scope.go:117] "RemoveContainer" containerID="b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.737365 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} err="failed to get container status \"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\": rpc error: code = NotFound desc = could not find container \"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\": container with ID starting with b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.737394 4735 scope.go:117] "RemoveContainer" containerID="ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.737761 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} err="failed to get container status \"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\": rpc error: code = NotFound desc = could not find container \"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\": container with ID starting with ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.737784 4735 scope.go:117] "RemoveContainer" containerID="5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.738141 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} err="failed to get container status 
\"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\": rpc error: code = NotFound desc = could not find container \"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\": container with ID starting with 5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.738162 4735 scope.go:117] "RemoveContainer" containerID="0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.738413 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} err="failed to get container status \"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\": rpc error: code = NotFound desc = could not find container \"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\": container with ID starting with 0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.738438 4735 scope.go:117] "RemoveContainer" containerID="2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.738728 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} err="failed to get container status \"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\": rpc error: code = NotFound desc = could not find container \"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\": container with ID starting with 2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.738800 4735 scope.go:117] "RemoveContainer" 
containerID="2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.739043 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} err="failed to get container status \"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\": rpc error: code = NotFound desc = could not find container \"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\": container with ID starting with 2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.739066 4735 scope.go:117] "RemoveContainer" containerID="55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.739300 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb"} err="failed to get container status \"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\": rpc error: code = NotFound desc = could not find container \"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\": container with ID starting with 55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.739330 4735 scope.go:117] "RemoveContainer" containerID="1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.739553 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} err="failed to get container status \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": rpc error: code = NotFound desc = could 
not find container \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": container with ID starting with 1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.739570 4735 scope.go:117] "RemoveContainer" containerID="e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.739770 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061"} err="failed to get container status \"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\": rpc error: code = NotFound desc = could not find container \"e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061\": container with ID starting with e1e7993c443ab38e8e06db73952615fe8d12422dfc2613e0f70e9247303fc061 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.739793 4735 scope.go:117] "RemoveContainer" containerID="c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.740046 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa"} err="failed to get container status \"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\": rpc error: code = NotFound desc = could not find container \"c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa\": container with ID starting with c2b658aebfcd46b6419bf39e41bb9a09fb1403ce933366812271c8a43d896caa not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.740081 4735 scope.go:117] "RemoveContainer" containerID="b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 
09:54:31.740360 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349"} err="failed to get container status \"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\": rpc error: code = NotFound desc = could not find container \"b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349\": container with ID starting with b13244eb404a0087da8b9c34e54f15171c4d227f6f2e855cd04ddc86d98a5349 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.740383 4735 scope.go:117] "RemoveContainer" containerID="ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.740623 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe"} err="failed to get container status \"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\": rpc error: code = NotFound desc = could not find container \"ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe\": container with ID starting with ee5ce145fe9c0cf7412071a4c266d1a32cb3e215ece2e5eb078eae5ba45b28fe not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.740659 4735 scope.go:117] "RemoveContainer" containerID="5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.740907 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9"} err="failed to get container status \"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\": rpc error: code = NotFound desc = could not find container \"5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9\": container with ID starting with 
5fa4a7a4b1c00c02db8e5aa60e67895a09e0ca27d13cc1b8fbb8ebad6abba0b9 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.740927 4735 scope.go:117] "RemoveContainer" containerID="0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.741198 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30"} err="failed to get container status \"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\": rpc error: code = NotFound desc = could not find container \"0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30\": container with ID starting with 0f1bf5ef25153e09dae2e353975e83d8e4c8a69496e5bb630304c9a20fe3bf30 not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.741282 4735 scope.go:117] "RemoveContainer" containerID="2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.741536 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe"} err="failed to get container status \"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\": rpc error: code = NotFound desc = could not find container \"2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe\": container with ID starting with 2acfd67ec9ab05551225b80f8a691be5b77438dd11b89a80f6a3b428df1497fe not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.741557 4735 scope.go:117] "RemoveContainer" containerID="2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.741811 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a"} err="failed to get container status \"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\": rpc error: code = NotFound desc = could not find container \"2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a\": container with ID starting with 2aee00efb80e3f41faee9889464216cc2b3fefd35abb72f011f301280a43bd4a not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.741844 4735 scope.go:117] "RemoveContainer" containerID="55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.742164 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb"} err="failed to get container status \"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\": rpc error: code = NotFound desc = could not find container \"55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb\": container with ID starting with 55468c6baebf5047a0e369859cd4a170904342c3255c51c706ad464e81001ecb not found: ID does not exist" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.742200 4735 scope.go:117] "RemoveContainer" containerID="1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262" Jan 28 09:54:31 crc kubenswrapper[4735]: I0128 09:54:31.742434 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262"} err="failed to get container status \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": rpc error: code = NotFound desc = could not find container \"1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262\": container with ID starting with 1f497518296c5dcf9c2caf6c5e32b62acc5a12e635ef08b97693cee4061d2262 not found: ID does not 
exist" Jan 28 09:54:32 crc kubenswrapper[4735]: I0128 09:54:32.482109 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerStarted","Data":"d67bab3fc566b33c3ccdaaccbe49a82a8f290e0dfbfec0df6ae9770e507a37dd"} Jan 28 09:54:32 crc kubenswrapper[4735]: I0128 09:54:32.482484 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerStarted","Data":"39853b661872489c17b7e4cdd6077841d3cd380fd1fa4459201b59a178351a56"} Jan 28 09:54:32 crc kubenswrapper[4735]: I0128 09:54:32.482501 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerStarted","Data":"f8585ca06bb38f67e686c9bc8be47858d5c461ae2ae916c041a752e378458010"} Jan 28 09:54:32 crc kubenswrapper[4735]: I0128 09:54:32.482512 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerStarted","Data":"6d1b02c38904c044be52fc8a1b758c5e944df502c4252a45d9a86cfededc7763"} Jan 28 09:54:32 crc kubenswrapper[4735]: I0128 09:54:32.482523 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerStarted","Data":"ad190f8612730836ae04aeabd381c72ed174a5adbbd6c638e42abc8017cfa9b6"} Jan 28 09:54:32 crc kubenswrapper[4735]: I0128 09:54:32.482534 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerStarted","Data":"37cdd5b516f732795ede2398ec34b919b59fd849b8a232bb99c3b34c0bc7da0e"} Jan 28 09:54:33 crc kubenswrapper[4735]: I0128 09:54:33.511077 4735 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0aebf3-b579-4254-9f8c-7163984836f9" path="/var/lib/kubelet/pods/9a0aebf3-b579-4254-9f8c-7163984836f9/volumes" Jan 28 09:54:34 crc kubenswrapper[4735]: I0128 09:54:34.499944 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerStarted","Data":"f2225f5bf303b612d67bdfc71e9b21bc2432554841aa1f7d44242b9e6ab7b9d4"} Jan 28 09:54:37 crc kubenswrapper[4735]: I0128 09:54:37.522149 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" event={"ID":"8c7e69b5-afe5-4020-a356-eea129781094","Type":"ContainerStarted","Data":"602e6d010fabc329cc59ac1b4f0d98b8c74baa66df2a31f23f98230f031b8f3e"} Jan 28 09:54:37 crc kubenswrapper[4735]: I0128 09:54:37.522707 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:37 crc kubenswrapper[4735]: I0128 09:54:37.577011 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:37 crc kubenswrapper[4735]: I0128 09:54:37.577724 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" podStartSLOduration=7.577699732 podStartE2EDuration="7.577699732s" podCreationTimestamp="2026-01-28 09:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:54:37.56367023 +0000 UTC m=+730.713334613" watchObservedRunningTime="2026-01-28 09:54:37.577699732 +0000 UTC m=+730.727364145" Jan 28 09:54:38 crc kubenswrapper[4735]: I0128 09:54:38.530263 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:38 crc kubenswrapper[4735]: I0128 
09:54:38.530486 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:38 crc kubenswrapper[4735]: I0128 09:54:38.605409 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:54:43 crc kubenswrapper[4735]: I0128 09:54:43.503201 4735 scope.go:117] "RemoveContainer" containerID="3a9483e690e02a0014915fbf7e27f090be84b506983548028df8cf697a5112a6" Jan 28 09:54:44 crc kubenswrapper[4735]: I0128 09:54:44.576589 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/2.log" Jan 28 09:54:44 crc kubenswrapper[4735]: I0128 09:54:44.578159 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/1.log" Jan 28 09:54:44 crc kubenswrapper[4735]: I0128 09:54:44.578222 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jxbwx" event={"ID":"c50beb21-41ab-474f-90d2-ba27152379cf","Type":"ContainerStarted","Data":"7dbe27d88267d6d18db5bed82754581c94b53f9773f9c38b21d42f95919ca58c"} Jan 28 09:55:01 crc kubenswrapper[4735]: I0128 09:55:01.139428 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8ztfk" Jan 28 09:55:01 crc kubenswrapper[4735]: I0128 09:55:01.404263 4735 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 09:55:05 crc kubenswrapper[4735]: I0128 09:55:05.430227 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:55:05 crc 
kubenswrapper[4735]: I0128 09:55:05.431148 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.206311 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb"] Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.207619 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.209438 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.224312 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb"] Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.255583 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.255666 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-util\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.255782 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l7mn\" (UniqueName: \"kubernetes.io/projected/03e770c2-1137-4e04-9775-a1187be4f65d-kube-api-access-8l7mn\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.357122 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.357171 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.357222 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l7mn\" (UniqueName: \"kubernetes.io/projected/03e770c2-1137-4e04-9775-a1187be4f65d-kube-api-access-8l7mn\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb\" (UID: 
\"03e770c2-1137-4e04-9775-a1187be4f65d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.357634 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.357940 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.378856 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l7mn\" (UniqueName: \"kubernetes.io/projected/03e770c2-1137-4e04-9775-a1187be4f65d-kube-api-access-8l7mn\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.535028 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:11 crc kubenswrapper[4735]: I0128 09:55:11.950101 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb"] Jan 28 09:55:12 crc kubenswrapper[4735]: I0128 09:55:12.759520 4735 generic.go:334] "Generic (PLEG): container finished" podID="03e770c2-1137-4e04-9775-a1187be4f65d" containerID="c4cf0f52065f8c4e66475e6cdc9b205ab5d06a990a74325af32a0207c9045bf6" exitCode=0 Jan 28 09:55:12 crc kubenswrapper[4735]: I0128 09:55:12.759571 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" event={"ID":"03e770c2-1137-4e04-9775-a1187be4f65d","Type":"ContainerDied","Data":"c4cf0f52065f8c4e66475e6cdc9b205ab5d06a990a74325af32a0207c9045bf6"} Jan 28 09:55:12 crc kubenswrapper[4735]: I0128 09:55:12.759601 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" event={"ID":"03e770c2-1137-4e04-9775-a1187be4f65d","Type":"ContainerStarted","Data":"82114ff484d089c279658c1a8412489e0e7a95762e6cb4f06fbcf853bc44475c"} Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.560256 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sfgcd"] Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.561544 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.579883 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfgcd"] Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.586512 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8tl\" (UniqueName: \"kubernetes.io/projected/47112b11-5b61-4d2c-8c6b-059b673f74c2-kube-api-access-8t8tl\") pod \"redhat-operators-sfgcd\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.586615 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-catalog-content\") pod \"redhat-operators-sfgcd\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.586978 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-utilities\") pod \"redhat-operators-sfgcd\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.689007 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t8tl\" (UniqueName: \"kubernetes.io/projected/47112b11-5b61-4d2c-8c6b-059b673f74c2-kube-api-access-8t8tl\") pod \"redhat-operators-sfgcd\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.689078 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-catalog-content\") pod \"redhat-operators-sfgcd\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.689145 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-utilities\") pod \"redhat-operators-sfgcd\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.689776 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-utilities\") pod \"redhat-operators-sfgcd\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.689863 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-catalog-content\") pod \"redhat-operators-sfgcd\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.721092 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t8tl\" (UniqueName: \"kubernetes.io/projected/47112b11-5b61-4d2c-8c6b-059b673f74c2-kube-api-access-8t8tl\") pod \"redhat-operators-sfgcd\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:13 crc kubenswrapper[4735]: I0128 09:55:13.886637 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:14 crc kubenswrapper[4735]: I0128 09:55:14.214640 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfgcd"] Jan 28 09:55:14 crc kubenswrapper[4735]: W0128 09:55:14.222595 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47112b11_5b61_4d2c_8c6b_059b673f74c2.slice/crio-da7ff6e512cee772eabe5d9d7ca6a90d9eec484519c766d12a132458960acd22 WatchSource:0}: Error finding container da7ff6e512cee772eabe5d9d7ca6a90d9eec484519c766d12a132458960acd22: Status 404 returned error can't find the container with id da7ff6e512cee772eabe5d9d7ca6a90d9eec484519c766d12a132458960acd22 Jan 28 09:55:14 crc kubenswrapper[4735]: I0128 09:55:14.770653 4735 generic.go:334] "Generic (PLEG): container finished" podID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerID="af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904" exitCode=0 Jan 28 09:55:14 crc kubenswrapper[4735]: I0128 09:55:14.770716 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgcd" event={"ID":"47112b11-5b61-4d2c-8c6b-059b673f74c2","Type":"ContainerDied","Data":"af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904"} Jan 28 09:55:14 crc kubenswrapper[4735]: I0128 09:55:14.770822 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgcd" event={"ID":"47112b11-5b61-4d2c-8c6b-059b673f74c2","Type":"ContainerStarted","Data":"da7ff6e512cee772eabe5d9d7ca6a90d9eec484519c766d12a132458960acd22"} Jan 28 09:55:14 crc kubenswrapper[4735]: I0128 09:55:14.772324 4735 generic.go:334] "Generic (PLEG): container finished" podID="03e770c2-1137-4e04-9775-a1187be4f65d" containerID="c70558c0f6183145ddb9acf152578d10a25201a6d4631ce618ed01256375972f" exitCode=0 Jan 28 09:55:14 crc kubenswrapper[4735]: I0128 09:55:14.772364 
4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" event={"ID":"03e770c2-1137-4e04-9775-a1187be4f65d","Type":"ContainerDied","Data":"c70558c0f6183145ddb9acf152578d10a25201a6d4631ce618ed01256375972f"} Jan 28 09:55:15 crc kubenswrapper[4735]: I0128 09:55:15.784437 4735 generic.go:334] "Generic (PLEG): container finished" podID="03e770c2-1137-4e04-9775-a1187be4f65d" containerID="b5aea466e0bea54845faf097d5e5f5f7c90a05bc58fa18af1385cd43ac0cca77" exitCode=0 Jan 28 09:55:15 crc kubenswrapper[4735]: I0128 09:55:15.784533 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" event={"ID":"03e770c2-1137-4e04-9775-a1187be4f65d","Type":"ContainerDied","Data":"b5aea466e0bea54845faf097d5e5f5f7c90a05bc58fa18af1385cd43ac0cca77"} Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.032422 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.145692 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-util\") pod \"03e770c2-1137-4e04-9775-a1187be4f65d\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.145825 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-bundle\") pod \"03e770c2-1137-4e04-9775-a1187be4f65d\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.145853 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l7mn\" (UniqueName: \"kubernetes.io/projected/03e770c2-1137-4e04-9775-a1187be4f65d-kube-api-access-8l7mn\") pod \"03e770c2-1137-4e04-9775-a1187be4f65d\" (UID: \"03e770c2-1137-4e04-9775-a1187be4f65d\") " Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.146624 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-bundle" (OuterVolumeSpecName: "bundle") pod "03e770c2-1137-4e04-9775-a1187be4f65d" (UID: "03e770c2-1137-4e04-9775-a1187be4f65d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.151789 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e770c2-1137-4e04-9775-a1187be4f65d-kube-api-access-8l7mn" (OuterVolumeSpecName: "kube-api-access-8l7mn") pod "03e770c2-1137-4e04-9775-a1187be4f65d" (UID: "03e770c2-1137-4e04-9775-a1187be4f65d"). InnerVolumeSpecName "kube-api-access-8l7mn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.159822 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-util" (OuterVolumeSpecName: "util") pod "03e770c2-1137-4e04-9775-a1187be4f65d" (UID: "03e770c2-1137-4e04-9775-a1187be4f65d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.247615 4735 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.247653 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l7mn\" (UniqueName: \"kubernetes.io/projected/03e770c2-1137-4e04-9775-a1187be4f65d-kube-api-access-8l7mn\") on node \"crc\" DevicePath \"\"" Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.247666 4735 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e770c2-1137-4e04-9775-a1187be4f65d-util\") on node \"crc\" DevicePath \"\"" Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.798815 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.798729 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb" event={"ID":"03e770c2-1137-4e04-9775-a1187be4f65d","Type":"ContainerDied","Data":"82114ff484d089c279658c1a8412489e0e7a95762e6cb4f06fbcf853bc44475c"} Jan 28 09:55:17 crc kubenswrapper[4735]: I0128 09:55:17.798915 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82114ff484d089c279658c1a8412489e0e7a95762e6cb4f06fbcf853bc44475c" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.244238 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7nsxc"] Jan 28 09:55:20 crc kubenswrapper[4735]: E0128 09:55:20.244812 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e770c2-1137-4e04-9775-a1187be4f65d" containerName="extract" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.244828 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e770c2-1137-4e04-9775-a1187be4f65d" containerName="extract" Jan 28 09:55:20 crc kubenswrapper[4735]: E0128 09:55:20.244849 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e770c2-1137-4e04-9775-a1187be4f65d" containerName="pull" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.244856 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e770c2-1137-4e04-9775-a1187be4f65d" containerName="pull" Jan 28 09:55:20 crc kubenswrapper[4735]: E0128 09:55:20.244874 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e770c2-1137-4e04-9775-a1187be4f65d" containerName="util" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.244881 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e770c2-1137-4e04-9775-a1187be4f65d" containerName="util" 
Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.244987 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e770c2-1137-4e04-9775-a1187be4f65d" containerName="extract" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.245449 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-7nsxc" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.249233 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.249454 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.249503 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zqnxm" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.263110 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7nsxc"] Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.292888 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fstch\" (UniqueName: \"kubernetes.io/projected/d505ac14-ad3f-45fc-a07c-c10acaeb03d2-kube-api-access-fstch\") pod \"nmstate-operator-646758c888-7nsxc\" (UID: \"d505ac14-ad3f-45fc-a07c-c10acaeb03d2\") " pod="openshift-nmstate/nmstate-operator-646758c888-7nsxc" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.394191 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fstch\" (UniqueName: \"kubernetes.io/projected/d505ac14-ad3f-45fc-a07c-c10acaeb03d2-kube-api-access-fstch\") pod \"nmstate-operator-646758c888-7nsxc\" (UID: \"d505ac14-ad3f-45fc-a07c-c10acaeb03d2\") " pod="openshift-nmstate/nmstate-operator-646758c888-7nsxc" Jan 28 09:55:20 crc 
kubenswrapper[4735]: I0128 09:55:20.413255 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fstch\" (UniqueName: \"kubernetes.io/projected/d505ac14-ad3f-45fc-a07c-c10acaeb03d2-kube-api-access-fstch\") pod \"nmstate-operator-646758c888-7nsxc\" (UID: \"d505ac14-ad3f-45fc-a07c-c10acaeb03d2\") " pod="openshift-nmstate/nmstate-operator-646758c888-7nsxc" Jan 28 09:55:20 crc kubenswrapper[4735]: I0128 09:55:20.562433 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-7nsxc" Jan 28 09:55:23 crc kubenswrapper[4735]: I0128 09:55:23.024301 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7nsxc"] Jan 28 09:55:23 crc kubenswrapper[4735]: W0128 09:55:23.040630 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd505ac14_ad3f_45fc_a07c_c10acaeb03d2.slice/crio-1d5254780307a010f09e469b858e89fd75d5decb6c0753dc6d5b6065708f95d9 WatchSource:0}: Error finding container 1d5254780307a010f09e469b858e89fd75d5decb6c0753dc6d5b6065708f95d9: Status 404 returned error can't find the container with id 1d5254780307a010f09e469b858e89fd75d5decb6c0753dc6d5b6065708f95d9 Jan 28 09:55:23 crc kubenswrapper[4735]: I0128 09:55:23.834226 4735 generic.go:334] "Generic (PLEG): container finished" podID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerID="567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489" exitCode=0 Jan 28 09:55:23 crc kubenswrapper[4735]: I0128 09:55:23.834302 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgcd" event={"ID":"47112b11-5b61-4d2c-8c6b-059b673f74c2","Type":"ContainerDied","Data":"567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489"} Jan 28 09:55:23 crc kubenswrapper[4735]: I0128 09:55:23.836081 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-operator-646758c888-7nsxc" event={"ID":"d505ac14-ad3f-45fc-a07c-c10acaeb03d2","Type":"ContainerStarted","Data":"1d5254780307a010f09e469b858e89fd75d5decb6c0753dc6d5b6065708f95d9"} Jan 28 09:55:24 crc kubenswrapper[4735]: I0128 09:55:24.843427 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgcd" event={"ID":"47112b11-5b61-4d2c-8c6b-059b673f74c2","Type":"ContainerStarted","Data":"1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b"} Jan 28 09:55:24 crc kubenswrapper[4735]: I0128 09:55:24.857243 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sfgcd" podStartSLOduration=2.282460778 podStartE2EDuration="11.85722364s" podCreationTimestamp="2026-01-28 09:55:13 +0000 UTC" firstStartedPulling="2026-01-28 09:55:14.77209656 +0000 UTC m=+767.921760933" lastFinishedPulling="2026-01-28 09:55:24.346859422 +0000 UTC m=+777.496523795" observedRunningTime="2026-01-28 09:55:24.856989005 +0000 UTC m=+778.006653378" watchObservedRunningTime="2026-01-28 09:55:24.85722364 +0000 UTC m=+778.006888013" Jan 28 09:55:25 crc kubenswrapper[4735]: I0128 09:55:25.854439 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-7nsxc" event={"ID":"d505ac14-ad3f-45fc-a07c-c10acaeb03d2","Type":"ContainerStarted","Data":"94a35f8ec2270da3324253109916e6db5cfbe33135fc36df84454f458497a125"} Jan 28 09:55:25 crc kubenswrapper[4735]: I0128 09:55:25.873224 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-7nsxc" podStartSLOduration=3.957501456 podStartE2EDuration="5.873206488s" podCreationTimestamp="2026-01-28 09:55:20 +0000 UTC" firstStartedPulling="2026-01-28 09:55:23.043674692 +0000 UTC m=+776.193339065" lastFinishedPulling="2026-01-28 09:55:24.959379724 +0000 UTC m=+778.109044097" observedRunningTime="2026-01-28 
09:55:25.871015895 +0000 UTC m=+779.020680308" watchObservedRunningTime="2026-01-28 09:55:25.873206488 +0000 UTC m=+779.022870871" Jan 28 09:55:27 crc kubenswrapper[4735]: I0128 09:55:27.853410 4735 scope.go:117] "RemoveContainer" containerID="c6c98ea668fd6d902a3e7435fed58b93cc58a04bb99900782a0e943bc253a42d" Jan 28 09:55:30 crc kubenswrapper[4735]: I0128 09:55:30.885083 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jxbwx_c50beb21-41ab-474f-90d2-ba27152379cf/kube-multus/2.log" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.024878 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sbpw9"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.026347 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.033810 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbpw9"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.122545 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn76g\" (UniqueName: \"kubernetes.io/projected/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-kube-api-access-wn76g\") pod \"certified-operators-sbpw9\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.122598 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-utilities\") pod \"certified-operators-sbpw9\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.122628 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-catalog-content\") pod \"certified-operators-sbpw9\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.223566 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-utilities\") pod \"certified-operators-sbpw9\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.224042 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-catalog-content\") pod \"certified-operators-sbpw9\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.224164 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn76g\" (UniqueName: \"kubernetes.io/projected/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-kube-api-access-wn76g\") pod \"certified-operators-sbpw9\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.224392 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-utilities\") pod \"certified-operators-sbpw9\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.224685 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-catalog-content\") pod \"certified-operators-sbpw9\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.244943 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn76g\" (UniqueName: \"kubernetes.io/projected/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-kube-api-access-wn76g\") pod \"certified-operators-sbpw9\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.319502 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-v9d5m"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.320526 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-v9d5m" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.323276 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-78gj5" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.333206 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-v9d5m"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.337967 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.339051 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.340431 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.344187 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.405924 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7pft9"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.406737 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.426680 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.427205 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-dbus-socket\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.427244 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mbs\" (UniqueName: \"kubernetes.io/projected/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-kube-api-access-f9mbs\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.427265 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-ovs-socket\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.427287 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdzgw\" (UniqueName: \"kubernetes.io/projected/19bcd978-0715-474c-8abf-6028efe34478-kube-api-access-wdzgw\") pod \"nmstate-webhook-8474b5b9d8-h28tw\" (UID: \"19bcd978-0715-474c-8abf-6028efe34478\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.427309 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqh2l\" (UniqueName: \"kubernetes.io/projected/855f8b5d-befc-4d5d-9653-c9820ca8eafd-kube-api-access-rqh2l\") pod \"nmstate-metrics-54757c584b-v9d5m\" (UID: \"855f8b5d-befc-4d5d-9653-c9820ca8eafd\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-v9d5m" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.427338 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/19bcd978-0715-474c-8abf-6028efe34478-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-h28tw\" (UID: \"19bcd978-0715-474c-8abf-6028efe34478\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.427387 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-nmstate-lock\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.549727 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-dbus-socket\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.549830 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9mbs\" (UniqueName: \"kubernetes.io/projected/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-kube-api-access-f9mbs\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.549860 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-ovs-socket\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.549886 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdzgw\" (UniqueName: \"kubernetes.io/projected/19bcd978-0715-474c-8abf-6028efe34478-kube-api-access-wdzgw\") pod \"nmstate-webhook-8474b5b9d8-h28tw\" (UID: \"19bcd978-0715-474c-8abf-6028efe34478\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.549915 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqh2l\" (UniqueName: \"kubernetes.io/projected/855f8b5d-befc-4d5d-9653-c9820ca8eafd-kube-api-access-rqh2l\") pod \"nmstate-metrics-54757c584b-v9d5m\" (UID: \"855f8b5d-befc-4d5d-9653-c9820ca8eafd\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-v9d5m" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.549938 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/19bcd978-0715-474c-8abf-6028efe34478-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-h28tw\" (UID: \"19bcd978-0715-474c-8abf-6028efe34478\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.550288 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-nmstate-lock\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.550906 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-ovs-socket\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.551239 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-nmstate-lock\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: E0128 09:55:31.551630 4735 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 28 09:55:31 crc kubenswrapper[4735]: E0128 09:55:31.551677 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19bcd978-0715-474c-8abf-6028efe34478-tls-key-pair podName:19bcd978-0715-474c-8abf-6028efe34478 nodeName:}" failed. No retries permitted until 2026-01-28 09:55:32.051662516 +0000 UTC m=+785.201326879 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/19bcd978-0715-474c-8abf-6028efe34478-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-h28tw" (UID: "19bcd978-0715-474c-8abf-6028efe34478") : secret "openshift-nmstate-webhook" not found Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.551821 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-dbus-socket\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.569602 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.570201 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.570286 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.576997 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqh2l\" (UniqueName: \"kubernetes.io/projected/855f8b5d-befc-4d5d-9653-c9820ca8eafd-kube-api-access-rqh2l\") pod \"nmstate-metrics-54757c584b-v9d5m\" (UID: \"855f8b5d-befc-4d5d-9653-c9820ca8eafd\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-v9d5m" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.577078 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdzgw\" (UniqueName: \"kubernetes.io/projected/19bcd978-0715-474c-8abf-6028efe34478-kube-api-access-wdzgw\") pod \"nmstate-webhook-8474b5b9d8-h28tw\" (UID: \"19bcd978-0715-474c-8abf-6028efe34478\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.577217 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.577296 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.577393 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2tj5d" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.597679 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9mbs\" (UniqueName: \"kubernetes.io/projected/8d7d44ef-aa10-45a5-9ee3-a61c6178fb52-kube-api-access-f9mbs\") pod \"nmstate-handler-7pft9\" (UID: \"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52\") " pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.649303 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-v9d5m" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.671959 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a554582-3291-4541-919b-b58a23e818df-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.672045 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgc8z\" (UniqueName: \"kubernetes.io/projected/5a554582-3291-4541-919b-b58a23e818df-kube-api-access-bgc8z\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.672171 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a554582-3291-4541-919b-b58a23e818df-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.748892 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5f7d7bbfcb-wmmqq"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.749561 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.760911 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.764015 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f7d7bbfcb-wmmqq"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813278 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgc8z\" (UniqueName: \"kubernetes.io/projected/5a554582-3291-4541-919b-b58a23e818df-kube-api-access-bgc8z\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813311 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqr8d\" (UniqueName: \"kubernetes.io/projected/277dbd93-5a86-4b42-a32a-af3d152a02c5-kube-api-access-vqr8d\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813346 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-console-config\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813532 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/277dbd93-5a86-4b42-a32a-af3d152a02c5-console-serving-cert\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813628 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-service-ca\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813676 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/277dbd93-5a86-4b42-a32a-af3d152a02c5-console-oauth-config\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813787 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a554582-3291-4541-919b-b58a23e818df-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813835 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-oauth-serving-cert\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813888 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-trusted-ca-bundle\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " 
pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.813948 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a554582-3291-4541-919b-b58a23e818df-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:31 crc kubenswrapper[4735]: E0128 09:55:31.813952 4735 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 28 09:55:31 crc kubenswrapper[4735]: E0128 09:55:31.814095 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a554582-3291-4541-919b-b58a23e818df-plugin-serving-cert podName:5a554582-3291-4541-919b-b58a23e818df nodeName:}" failed. No retries permitted until 2026-01-28 09:55:32.314070435 +0000 UTC m=+785.463734818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5a554582-3291-4541-919b-b58a23e818df-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-grbj7" (UID: "5a554582-3291-4541-919b-b58a23e818df") : secret "plugin-serving-cert" not found Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.815301 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a554582-3291-4541-919b-b58a23e818df-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.839373 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgc8z\" (UniqueName: \"kubernetes.io/projected/5a554582-3291-4541-919b-b58a23e818df-kube-api-access-bgc8z\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.891784 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7pft9" event={"ID":"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52","Type":"ContainerStarted","Data":"ad00548ffb789f62d04239054a9b2d61025d8173750376752e6df06dceaa66a6"} Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.916511 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/277dbd93-5a86-4b42-a32a-af3d152a02c5-console-serving-cert\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.916558 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-service-ca\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.916579 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/277dbd93-5a86-4b42-a32a-af3d152a02c5-console-oauth-config\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.916617 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-oauth-serving-cert\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.916639 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-trusted-ca-bundle\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.916675 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqr8d\" (UniqueName: \"kubernetes.io/projected/277dbd93-5a86-4b42-a32a-af3d152a02c5-kube-api-access-vqr8d\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.916701 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-console-config\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.917510 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-console-config\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.918346 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-oauth-serving-cert\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.918877 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-service-ca\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.919939 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277dbd93-5a86-4b42-a32a-af3d152a02c5-trusted-ca-bundle\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.920807 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/277dbd93-5a86-4b42-a32a-af3d152a02c5-console-serving-cert\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.923876 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/277dbd93-5a86-4b42-a32a-af3d152a02c5-console-oauth-config\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.926282 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbpw9"] Jan 28 09:55:31 crc kubenswrapper[4735]: I0128 09:55:31.946412 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqr8d\" (UniqueName: \"kubernetes.io/projected/277dbd93-5a86-4b42-a32a-af3d152a02c5-kube-api-access-vqr8d\") pod \"console-5f7d7bbfcb-wmmqq\" (UID: \"277dbd93-5a86-4b42-a32a-af3d152a02c5\") " pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.074019 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-v9d5m"] Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.115602 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.118741 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/19bcd978-0715-474c-8abf-6028efe34478-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-h28tw\" (UID: \"19bcd978-0715-474c-8abf-6028efe34478\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.123313 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/19bcd978-0715-474c-8abf-6028efe34478-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-h28tw\" (UID: \"19bcd978-0715-474c-8abf-6028efe34478\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.280567 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f7d7bbfcb-wmmqq"] Jan 28 09:55:32 crc kubenswrapper[4735]: W0128 09:55:32.289571 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod277dbd93_5a86_4b42_a32a_af3d152a02c5.slice/crio-1419eeb26dc5bb0955afafac2b1356f2b208df9330400d6a42f684f9996c6eac WatchSource:0}: Error finding container 1419eeb26dc5bb0955afafac2b1356f2b208df9330400d6a42f684f9996c6eac: Status 404 returned error can't find the container with id 1419eeb26dc5bb0955afafac2b1356f2b208df9330400d6a42f684f9996c6eac Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.313724 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.320832 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a554582-3291-4541-919b-b58a23e818df-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.324596 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a554582-3291-4541-919b-b58a23e818df-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-grbj7\" (UID: \"5a554582-3291-4541-919b-b58a23e818df\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.490459 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw"] Jan 28 09:55:32 crc kubenswrapper[4735]: W0128 09:55:32.499050 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19bcd978_0715_474c_8abf_6028efe34478.slice/crio-9c7b4b674e691ed32c8ca16ae304cb60a876b6c51a5c5043f94afe17b24246b9 WatchSource:0}: Error finding container 9c7b4b674e691ed32c8ca16ae304cb60a876b6c51a5c5043f94afe17b24246b9: Status 404 returned error can't find the container with id 9c7b4b674e691ed32c8ca16ae304cb60a876b6c51a5c5043f94afe17b24246b9 Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.540149 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.718720 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7"] Jan 28 09:55:32 crc kubenswrapper[4735]: W0128 09:55:32.721824 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a554582_3291_4541_919b_b58a23e818df.slice/crio-37e3ae39190d85abfa0e41039c4426caa0a4bd622df7a21bf9024123623b1b3a WatchSource:0}: Error finding container 37e3ae39190d85abfa0e41039c4426caa0a4bd622df7a21bf9024123623b1b3a: Status 404 returned error can't find the container with id 37e3ae39190d85abfa0e41039c4426caa0a4bd622df7a21bf9024123623b1b3a Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.899671 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" event={"ID":"19bcd978-0715-474c-8abf-6028efe34478","Type":"ContainerStarted","Data":"9c7b4b674e691ed32c8ca16ae304cb60a876b6c51a5c5043f94afe17b24246b9"} Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.901265 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-v9d5m" event={"ID":"855f8b5d-befc-4d5d-9653-c9820ca8eafd","Type":"ContainerStarted","Data":"8ae448730118aac169f8271ba3f0f082fb2b70292ec487ef73d068639f536767"} Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.902802 4735 generic.go:334] "Generic (PLEG): container finished" podID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerID="b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f" exitCode=0 Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.902912 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbpw9" 
event={"ID":"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b","Type":"ContainerDied","Data":"b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f"} Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.902942 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbpw9" event={"ID":"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b","Type":"ContainerStarted","Data":"c4455012e83220967f2e47a264c0e18b600e6d1c85ac61dacd3e259c3332148b"} Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.905086 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f7d7bbfcb-wmmqq" event={"ID":"277dbd93-5a86-4b42-a32a-af3d152a02c5","Type":"ContainerStarted","Data":"15ae4f09ed7bcadf3650a8a64b180569bb9273f6c4c3b9dc2b71fc75442732fb"} Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.905213 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f7d7bbfcb-wmmqq" event={"ID":"277dbd93-5a86-4b42-a32a-af3d152a02c5","Type":"ContainerStarted","Data":"1419eeb26dc5bb0955afafac2b1356f2b208df9330400d6a42f684f9996c6eac"} Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.906173 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" event={"ID":"5a554582-3291-4541-919b-b58a23e818df","Type":"ContainerStarted","Data":"37e3ae39190d85abfa0e41039c4426caa0a4bd622df7a21bf9024123623b1b3a"} Jan 28 09:55:32 crc kubenswrapper[4735]: I0128 09:55:32.937865 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5f7d7bbfcb-wmmqq" podStartSLOduration=1.937839316 podStartE2EDuration="1.937839316s" podCreationTimestamp="2026-01-28 09:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:55:32.93478346 +0000 UTC m=+786.084447843" watchObservedRunningTime="2026-01-28 09:55:32.937839316 +0000 UTC 
m=+786.087503689" Jan 28 09:55:33 crc kubenswrapper[4735]: I0128 09:55:33.887037 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:33 crc kubenswrapper[4735]: I0128 09:55:33.887123 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:33 crc kubenswrapper[4735]: I0128 09:55:33.930327 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:33 crc kubenswrapper[4735]: I0128 09:55:33.981433 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 09:55:34 crc kubenswrapper[4735]: I0128 09:55:34.920964 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7pft9" event={"ID":"8d7d44ef-aa10-45a5-9ee3-a61c6178fb52","Type":"ContainerStarted","Data":"8d4b36a1a9166c0b66f70b78ae79017c53f279feb5e961f1354e4d9ba9c3f126"} Jan 28 09:55:34 crc kubenswrapper[4735]: I0128 09:55:34.921860 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:34 crc kubenswrapper[4735]: I0128 09:55:34.923891 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" event={"ID":"19bcd978-0715-474c-8abf-6028efe34478","Type":"ContainerStarted","Data":"6454ecda26b070482f6b34ad64093f33e5ea57ecaeb5f5b44e275cb01f4c6c85"} Jan 28 09:55:34 crc kubenswrapper[4735]: I0128 09:55:34.924250 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:55:34 crc kubenswrapper[4735]: I0128 09:55:34.926182 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-v9d5m" 
event={"ID":"855f8b5d-befc-4d5d-9653-c9820ca8eafd","Type":"ContainerStarted","Data":"54beb09bd31cf52dad38e80974d6f63d3006e99dee6bc576506eaf656b8ec37a"} Jan 28 09:55:34 crc kubenswrapper[4735]: I0128 09:55:34.928226 4735 generic.go:334] "Generic (PLEG): container finished" podID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerID="d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e" exitCode=0 Jan 28 09:55:34 crc kubenswrapper[4735]: I0128 09:55:34.928277 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbpw9" event={"ID":"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b","Type":"ContainerDied","Data":"d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e"} Jan 28 09:55:34 crc kubenswrapper[4735]: I0128 09:55:34.939680 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7pft9" podStartSLOduration=1.44248821 podStartE2EDuration="3.939652325s" podCreationTimestamp="2026-01-28 09:55:31 +0000 UTC" firstStartedPulling="2026-01-28 09:55:31.815424847 +0000 UTC m=+784.965089220" lastFinishedPulling="2026-01-28 09:55:34.312588962 +0000 UTC m=+787.462253335" observedRunningTime="2026-01-28 09:55:34.939262316 +0000 UTC m=+788.088926699" watchObservedRunningTime="2026-01-28 09:55:34.939652325 +0000 UTC m=+788.089316728" Jan 28 09:55:35 crc kubenswrapper[4735]: I0128 09:55:35.427212 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" podStartSLOduration=2.612925551 podStartE2EDuration="4.427191137s" podCreationTimestamp="2026-01-28 09:55:31 +0000 UTC" firstStartedPulling="2026-01-28 09:55:32.500438678 +0000 UTC m=+785.650103041" lastFinishedPulling="2026-01-28 09:55:34.314704254 +0000 UTC m=+787.464368627" observedRunningTime="2026-01-28 09:55:34.975511314 +0000 UTC m=+788.125175717" watchObservedRunningTime="2026-01-28 09:55:35.427191137 +0000 UTC m=+788.576855520" Jan 28 
09:55:35 crc kubenswrapper[4735]: I0128 09:55:35.429891 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfgcd"] Jan 28 09:55:35 crc kubenswrapper[4735]: I0128 09:55:35.431411 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:55:35 crc kubenswrapper[4735]: I0128 09:55:35.431469 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:55:35 crc kubenswrapper[4735]: I0128 09:55:35.803960 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkm7n"] Jan 28 09:55:35 crc kubenswrapper[4735]: I0128 09:55:35.805925 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hkm7n" podUID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerName="registry-server" containerID="cri-o://5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692" gracePeriod=2 Jan 28 09:55:35 crc kubenswrapper[4735]: I0128 09:55:35.943634 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" event={"ID":"5a554582-3291-4541-919b-b58a23e818df","Type":"ContainerStarted","Data":"5225124841bc7fd34b0fa99903ef11e3a9276ff66ffd53ac2289f56b2fea7c65"} Jan 28 09:55:35 crc kubenswrapper[4735]: I0128 09:55:35.963096 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grbj7" 
podStartSLOduration=2.365848982 podStartE2EDuration="4.963070278s" podCreationTimestamp="2026-01-28 09:55:31 +0000 UTC" firstStartedPulling="2026-01-28 09:55:32.723382387 +0000 UTC m=+785.873046760" lastFinishedPulling="2026-01-28 09:55:35.320603683 +0000 UTC m=+788.470268056" observedRunningTime="2026-01-28 09:55:35.960108054 +0000 UTC m=+789.109772447" watchObservedRunningTime="2026-01-28 09:55:35.963070278 +0000 UTC m=+789.112734651" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.181710 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.279429 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-utilities\") pod \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.279978 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm5cx\" (UniqueName: \"kubernetes.io/projected/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-kube-api-access-zm5cx\") pod \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.280068 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-catalog-content\") pod \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\" (UID: \"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f\") " Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.280701 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-utilities" (OuterVolumeSpecName: "utilities") pod "9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" 
(UID: "9dbc89ca-3e14-4e8d-ad36-684f3fd5061f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.288923 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-kube-api-access-zm5cx" (OuterVolumeSpecName: "kube-api-access-zm5cx") pod "9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" (UID: "9dbc89ca-3e14-4e8d-ad36-684f3fd5061f"). InnerVolumeSpecName "kube-api-access-zm5cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.381898 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.381932 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm5cx\" (UniqueName: \"kubernetes.io/projected/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-kube-api-access-zm5cx\") on node \"crc\" DevicePath \"\"" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.406712 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" (UID: "9dbc89ca-3e14-4e8d-ad36-684f3fd5061f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.485427 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.957157 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbpw9" event={"ID":"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b","Type":"ContainerStarted","Data":"e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d"} Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.961373 4735 generic.go:334] "Generic (PLEG): container finished" podID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerID="5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692" exitCode=0 Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.961428 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hkm7n" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.961493 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkm7n" event={"ID":"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f","Type":"ContainerDied","Data":"5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692"} Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.961518 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hkm7n" event={"ID":"9dbc89ca-3e14-4e8d-ad36-684f3fd5061f","Type":"ContainerDied","Data":"d3fa1fe0fe03ad2655b8af6bc355fb08090cba9d04b9ed26cf3db9bfba74822d"} Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.961536 4735 scope.go:117] "RemoveContainer" containerID="5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692" Jan 28 09:55:36 crc kubenswrapper[4735]: I0128 09:55:36.979018 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sbpw9" podStartSLOduration=2.739828986 podStartE2EDuration="5.978998534s" podCreationTimestamp="2026-01-28 09:55:31 +0000 UTC" firstStartedPulling="2026-01-28 09:55:32.904505129 +0000 UTC m=+786.054169502" lastFinishedPulling="2026-01-28 09:55:36.143674677 +0000 UTC m=+789.293339050" observedRunningTime="2026-01-28 09:55:36.976888442 +0000 UTC m=+790.126552845" watchObservedRunningTime="2026-01-28 09:55:36.978998534 +0000 UTC m=+790.128662907" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.010229 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hkm7n"] Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.017568 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hkm7n"] Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.456382 4735 scope.go:117] "RemoveContainer" 
containerID="6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.508943 4735 scope.go:117] "RemoveContainer" containerID="559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.511045 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" path="/var/lib/kubelet/pods/9dbc89ca-3e14-4e8d-ad36-684f3fd5061f/volumes" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.523654 4735 scope.go:117] "RemoveContainer" containerID="5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692" Jan 28 09:55:37 crc kubenswrapper[4735]: E0128 09:55:37.524423 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692\": container with ID starting with 5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692 not found: ID does not exist" containerID="5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.524487 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692"} err="failed to get container status \"5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692\": rpc error: code = NotFound desc = could not find container \"5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692\": container with ID starting with 5a0dbc37fd00dbee807425424b81f0bffe04cdf84684d3fa89af274f28876692 not found: ID does not exist" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.524525 4735 scope.go:117] "RemoveContainer" containerID="6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d" Jan 28 09:55:37 crc kubenswrapper[4735]: E0128 09:55:37.525074 4735 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d\": container with ID starting with 6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d not found: ID does not exist" containerID="6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.525132 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d"} err="failed to get container status \"6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d\": rpc error: code = NotFound desc = could not find container \"6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d\": container with ID starting with 6a885af02a678eb954dc508dd6cac7789432ecbbae4129e2cb5830987743508d not found: ID does not exist" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.525172 4735 scope.go:117] "RemoveContainer" containerID="559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e" Jan 28 09:55:37 crc kubenswrapper[4735]: E0128 09:55:37.525602 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e\": container with ID starting with 559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e not found: ID does not exist" containerID="559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.525638 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e"} err="failed to get container status \"559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e\": rpc error: code = NotFound 
desc = could not find container \"559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e\": container with ID starting with 559eb35f659a99c8d14fbb035dbabdb6a6c41123eef68335b55d452ec5d3290e not found: ID does not exist" Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.969929 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-v9d5m" event={"ID":"855f8b5d-befc-4d5d-9653-c9820ca8eafd","Type":"ContainerStarted","Data":"81f0b8f03f619d32852104dc0ad89bb564ebd4c7d46beed26db8004ea4e0b2ac"} Jan 28 09:55:37 crc kubenswrapper[4735]: I0128 09:55:37.987802 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-v9d5m" podStartSLOduration=1.562331553 podStartE2EDuration="6.987781005s" podCreationTimestamp="2026-01-28 09:55:31 +0000 UTC" firstStartedPulling="2026-01-28 09:55:32.088850199 +0000 UTC m=+785.238514572" lastFinishedPulling="2026-01-28 09:55:37.514299641 +0000 UTC m=+790.663964024" observedRunningTime="2026-01-28 09:55:37.983201701 +0000 UTC m=+791.132866074" watchObservedRunningTime="2026-01-28 09:55:37.987781005 +0000 UTC m=+791.137445378" Jan 28 09:55:41 crc kubenswrapper[4735]: I0128 09:55:41.345151 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:41 crc kubenswrapper[4735]: I0128 09:55:41.345225 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:41 crc kubenswrapper[4735]: I0128 09:55:41.417544 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:41 crc kubenswrapper[4735]: I0128 09:55:41.782648 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7pft9" Jan 28 09:55:42 crc kubenswrapper[4735]: I0128 
09:55:42.061016 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:42 crc kubenswrapper[4735]: I0128 09:55:42.116795 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:42 crc kubenswrapper[4735]: I0128 09:55:42.116884 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:42 crc kubenswrapper[4735]: I0128 09:55:42.122040 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:42 crc kubenswrapper[4735]: I0128 09:55:42.604130 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbpw9"] Jan 28 09:55:43 crc kubenswrapper[4735]: I0128 09:55:43.009881 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5f7d7bbfcb-wmmqq" Jan 28 09:55:43 crc kubenswrapper[4735]: I0128 09:55:43.071434 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6dsdg"] Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.009200 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sbpw9" podUID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerName="registry-server" containerID="cri-o://e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d" gracePeriod=2 Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.363368 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.490043 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-catalog-content\") pod \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.490102 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn76g\" (UniqueName: \"kubernetes.io/projected/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-kube-api-access-wn76g\") pod \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.490190 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-utilities\") pod \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\" (UID: \"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b\") " Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.491325 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-utilities" (OuterVolumeSpecName: "utilities") pod "d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" (UID: "d5d3d6bc-e91f-480a-a19c-46fa029c7e4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.496856 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-kube-api-access-wn76g" (OuterVolumeSpecName: "kube-api-access-wn76g") pod "d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" (UID: "d5d3d6bc-e91f-480a-a19c-46fa029c7e4b"). InnerVolumeSpecName "kube-api-access-wn76g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.542022 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" (UID: "d5d3d6bc-e91f-480a-a19c-46fa029c7e4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.592602 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn76g\" (UniqueName: \"kubernetes.io/projected/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-kube-api-access-wn76g\") on node \"crc\" DevicePath \"\"" Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.592654 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:55:44 crc kubenswrapper[4735]: I0128 09:55:44.592666 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.016643 4735 generic.go:334] "Generic (PLEG): container finished" podID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerID="e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d" exitCode=0 Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.016702 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbpw9" event={"ID":"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b","Type":"ContainerDied","Data":"e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d"} Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.016734 4735 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-sbpw9" event={"ID":"d5d3d6bc-e91f-480a-a19c-46fa029c7e4b","Type":"ContainerDied","Data":"c4455012e83220967f2e47a264c0e18b600e6d1c85ac61dacd3e259c3332148b"} Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.016769 4735 scope.go:117] "RemoveContainer" containerID="e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.017350 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbpw9" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.036244 4735 scope.go:117] "RemoveContainer" containerID="d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.052806 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbpw9"] Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.057350 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sbpw9"] Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.061168 4735 scope.go:117] "RemoveContainer" containerID="b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.076825 4735 scope.go:117] "RemoveContainer" containerID="e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d" Jan 28 09:55:45 crc kubenswrapper[4735]: E0128 09:55:45.077242 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d\": container with ID starting with e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d not found: ID does not exist" containerID="e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 
09:55:45.077291 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d"} err="failed to get container status \"e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d\": rpc error: code = NotFound desc = could not find container \"e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d\": container with ID starting with e1c9a834b3ce731eb053c922e1f61f98a0d88115a444fa1167e934f14f70215d not found: ID does not exist" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.077313 4735 scope.go:117] "RemoveContainer" containerID="d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e" Jan 28 09:55:45 crc kubenswrapper[4735]: E0128 09:55:45.077737 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e\": container with ID starting with d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e not found: ID does not exist" containerID="d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.077787 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e"} err="failed to get container status \"d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e\": rpc error: code = NotFound desc = could not find container \"d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e\": container with ID starting with d902888eb622005a2e3d68f6695210eac7d5e659f4adbb22bb1d4e00a616d50e not found: ID does not exist" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.077815 4735 scope.go:117] "RemoveContainer" containerID="b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f" Jan 28 09:55:45 crc 
kubenswrapper[4735]: E0128 09:55:45.078198 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f\": container with ID starting with b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f not found: ID does not exist" containerID="b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.078219 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f"} err="failed to get container status \"b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f\": rpc error: code = NotFound desc = could not find container \"b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f\": container with ID starting with b23eb4f578e54997fb1953defffe25ac4ed2b27393eb3dd18db058c22a603a9f not found: ID does not exist" Jan 28 09:55:45 crc kubenswrapper[4735]: I0128 09:55:45.511952 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" path="/var/lib/kubelet/pods/d5d3d6bc-e91f-480a-a19c-46fa029c7e4b/volumes" Jan 28 09:55:52 crc kubenswrapper[4735]: I0128 09:55:52.320968 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-h28tw" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.143524 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46"] Jan 28 09:56:05 crc kubenswrapper[4735]: E0128 09:56:05.144655 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerName="registry-server" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.144674 4735 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerName="registry-server" Jan 28 09:56:05 crc kubenswrapper[4735]: E0128 09:56:05.144696 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerName="extract-content" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.144703 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerName="extract-content" Jan 28 09:56:05 crc kubenswrapper[4735]: E0128 09:56:05.144712 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerName="extract-content" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.144719 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerName="extract-content" Jan 28 09:56:05 crc kubenswrapper[4735]: E0128 09:56:05.144731 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerName="registry-server" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.144738 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerName="registry-server" Jan 28 09:56:05 crc kubenswrapper[4735]: E0128 09:56:05.144770 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerName="extract-utilities" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.144778 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerName="extract-utilities" Jan 28 09:56:05 crc kubenswrapper[4735]: E0128 09:56:05.144786 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerName="extract-utilities" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.144794 4735 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerName="extract-utilities" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.144913 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d3d6bc-e91f-480a-a19c-46fa029c7e4b" containerName="registry-server" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.144924 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dbc89ca-3e14-4e8d-ad36-684f3fd5061f" containerName="registry-server" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.145931 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.148488 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.160578 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46"] Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.308371 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.308471 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.308526 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdzn8\" (UniqueName: \"kubernetes.io/projected/54d4636f-758a-41b9-9491-69cb7f38afbc-kube-api-access-rdzn8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.409795 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.409886 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.409937 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdzn8\" (UniqueName: \"kubernetes.io/projected/54d4636f-758a-41b9-9491-69cb7f38afbc-kube-api-access-rdzn8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 
09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.411831 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.412542 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.429968 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.430140 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.430227 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.430906 4735 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffb90dbb3d67df0d3c11867f5146dc5bfc3eacdd73311f14c6982a1e91ecde2d"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.431090 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://ffb90dbb3d67df0d3c11867f5146dc5bfc3eacdd73311f14c6982a1e91ecde2d" gracePeriod=600 Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.439966 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdzn8\" (UniqueName: \"kubernetes.io/projected/54d4636f-758a-41b9-9491-69cb7f38afbc-kube-api-access-rdzn8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.521487 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:05 crc kubenswrapper[4735]: I0128 09:56:05.947879 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46"] Jan 28 09:56:06 crc kubenswrapper[4735]: I0128 09:56:06.152609 4735 generic.go:334] "Generic (PLEG): container finished" podID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerID="5c58f15d902c2e74604e997540c77ddcb40547f1b2c679b5988bca0c94d52cc4" exitCode=0 Jan 28 09:56:06 crc kubenswrapper[4735]: I0128 09:56:06.152698 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" event={"ID":"54d4636f-758a-41b9-9491-69cb7f38afbc","Type":"ContainerDied","Data":"5c58f15d902c2e74604e997540c77ddcb40547f1b2c679b5988bca0c94d52cc4"} Jan 28 09:56:06 crc kubenswrapper[4735]: I0128 09:56:06.153024 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" event={"ID":"54d4636f-758a-41b9-9491-69cb7f38afbc","Type":"ContainerStarted","Data":"0abf1f518c70d219ef3d8d35d2e9270680f9fb48bfb04b2cf18f8d97bfead1dc"} Jan 28 09:56:06 crc kubenswrapper[4735]: I0128 09:56:06.159892 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="ffb90dbb3d67df0d3c11867f5146dc5bfc3eacdd73311f14c6982a1e91ecde2d" exitCode=0 Jan 28 09:56:06 crc kubenswrapper[4735]: I0128 09:56:06.159934 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"ffb90dbb3d67df0d3c11867f5146dc5bfc3eacdd73311f14c6982a1e91ecde2d"} Jan 28 09:56:06 crc kubenswrapper[4735]: I0128 09:56:06.159992 4735 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"426bd6318f22bcaa2e01148fa51d1f918d9825a66fecb193aef53ed2e7aff201"} Jan 28 09:56:06 crc kubenswrapper[4735]: I0128 09:56:06.160024 4735 scope.go:117] "RemoveContainer" containerID="cdd11a1295f7313637000d20b5c416225b55aabcfd028387bb27b194be3b0438" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.129036 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6dsdg" podUID="bfc4463e-9f59-4e22-b40d-ebf47cc84969" containerName="console" containerID="cri-o://7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee" gracePeriod=15 Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.178871 4735 generic.go:334] "Generic (PLEG): container finished" podID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerID="d6cad5dbe88f086ba3e3ee35c4e15fc19969319faf1e5383d5a2eb1369569769" exitCode=0 Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.178919 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" event={"ID":"54d4636f-758a-41b9-9491-69cb7f38afbc","Type":"ContainerDied","Data":"d6cad5dbe88f086ba3e3ee35c4e15fc19969319faf1e5383d5a2eb1369569769"} Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.573838 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6dsdg_bfc4463e-9f59-4e22-b40d-ebf47cc84969/console/0.log" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.574204 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.659492 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-service-ca\") pod \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.659568 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n945q\" (UniqueName: \"kubernetes.io/projected/bfc4463e-9f59-4e22-b40d-ebf47cc84969-kube-api-access-n945q\") pod \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.659605 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-serving-cert\") pod \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.659732 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-config\") pod \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.659785 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-oauth-serving-cert\") pod \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.659808 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-trusted-ca-bundle\") pod \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.659828 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-oauth-config\") pod \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\" (UID: \"bfc4463e-9f59-4e22-b40d-ebf47cc84969\") " Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.660277 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-service-ca" (OuterVolumeSpecName: "service-ca") pod "bfc4463e-9f59-4e22-b40d-ebf47cc84969" (UID: "bfc4463e-9f59-4e22-b40d-ebf47cc84969"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.660338 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bfc4463e-9f59-4e22-b40d-ebf47cc84969" (UID: "bfc4463e-9f59-4e22-b40d-ebf47cc84969"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.660768 4735 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.660794 4735 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.661223 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-config" (OuterVolumeSpecName: "console-config") pod "bfc4463e-9f59-4e22-b40d-ebf47cc84969" (UID: "bfc4463e-9f59-4e22-b40d-ebf47cc84969"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.661234 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bfc4463e-9f59-4e22-b40d-ebf47cc84969" (UID: "bfc4463e-9f59-4e22-b40d-ebf47cc84969"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.665181 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bfc4463e-9f59-4e22-b40d-ebf47cc84969" (UID: "bfc4463e-9f59-4e22-b40d-ebf47cc84969"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.665629 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bfc4463e-9f59-4e22-b40d-ebf47cc84969" (UID: "bfc4463e-9f59-4e22-b40d-ebf47cc84969"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.666070 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc4463e-9f59-4e22-b40d-ebf47cc84969-kube-api-access-n945q" (OuterVolumeSpecName: "kube-api-access-n945q") pod "bfc4463e-9f59-4e22-b40d-ebf47cc84969" (UID: "bfc4463e-9f59-4e22-b40d-ebf47cc84969"). InnerVolumeSpecName "kube-api-access-n945q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.762036 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n945q\" (UniqueName: \"kubernetes.io/projected/bfc4463e-9f59-4e22-b40d-ebf47cc84969-kube-api-access-n945q\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.762076 4735 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.762090 4735 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.762104 4735 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bfc4463e-9f59-4e22-b40d-ebf47cc84969-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:08 crc kubenswrapper[4735]: I0128 09:56:08.762117 4735 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bfc4463e-9f59-4e22-b40d-ebf47cc84969-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.188944 4735 generic.go:334] "Generic (PLEG): container finished" podID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerID="74e42313ac57ee8ae0a501d8144525ecc4b090d16240b63c203c0feb7f27283a" exitCode=0 Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.189030 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" event={"ID":"54d4636f-758a-41b9-9491-69cb7f38afbc","Type":"ContainerDied","Data":"74e42313ac57ee8ae0a501d8144525ecc4b090d16240b63c203c0feb7f27283a"} Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.191917 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6dsdg_bfc4463e-9f59-4e22-b40d-ebf47cc84969/console/0.log" Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.191994 4735 generic.go:334] "Generic (PLEG): container finished" podID="bfc4463e-9f59-4e22-b40d-ebf47cc84969" containerID="7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee" exitCode=2 Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.192043 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6dsdg" event={"ID":"bfc4463e-9f59-4e22-b40d-ebf47cc84969","Type":"ContainerDied","Data":"7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee"} Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.192091 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6dsdg" 
event={"ID":"bfc4463e-9f59-4e22-b40d-ebf47cc84969","Type":"ContainerDied","Data":"a2925c4ebc4a2df87c687ddfef3a470bfad45b4031ce8ea94d053fa403286516"} Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.192103 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6dsdg" Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.192122 4735 scope.go:117] "RemoveContainer" containerID="7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee" Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.221162 4735 scope.go:117] "RemoveContainer" containerID="7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee" Jan 28 09:56:09 crc kubenswrapper[4735]: E0128 09:56:09.221774 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee\": container with ID starting with 7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee not found: ID does not exist" containerID="7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee" Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.221819 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee"} err="failed to get container status \"7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee\": rpc error: code = NotFound desc = could not find container \"7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee\": container with ID starting with 7233ecf6a42789da22c2a6d79e493c9f9e00511fb083c1ff059984c317eb7fee not found: ID does not exist" Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.229669 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6dsdg"] Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.237551 4735 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6dsdg"] Jan 28 09:56:09 crc kubenswrapper[4735]: I0128 09:56:09.516567 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfc4463e-9f59-4e22-b40d-ebf47cc84969" path="/var/lib/kubelet/pods/bfc4463e-9f59-4e22-b40d-ebf47cc84969/volumes" Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.431113 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.585113 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdzn8\" (UniqueName: \"kubernetes.io/projected/54d4636f-758a-41b9-9491-69cb7f38afbc-kube-api-access-rdzn8\") pod \"54d4636f-758a-41b9-9491-69cb7f38afbc\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.585763 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-util\") pod \"54d4636f-758a-41b9-9491-69cb7f38afbc\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.585992 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-bundle\") pod \"54d4636f-758a-41b9-9491-69cb7f38afbc\" (UID: \"54d4636f-758a-41b9-9491-69cb7f38afbc\") " Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.587236 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-bundle" (OuterVolumeSpecName: "bundle") pod "54d4636f-758a-41b9-9491-69cb7f38afbc" (UID: "54d4636f-758a-41b9-9491-69cb7f38afbc"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.593940 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d4636f-758a-41b9-9491-69cb7f38afbc-kube-api-access-rdzn8" (OuterVolumeSpecName: "kube-api-access-rdzn8") pod "54d4636f-758a-41b9-9491-69cb7f38afbc" (UID: "54d4636f-758a-41b9-9491-69cb7f38afbc"). InnerVolumeSpecName "kube-api-access-rdzn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.599267 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-util" (OuterVolumeSpecName: "util") pod "54d4636f-758a-41b9-9491-69cb7f38afbc" (UID: "54d4636f-758a-41b9-9491-69cb7f38afbc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.687436 4735 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-util\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.687477 4735 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54d4636f-758a-41b9-9491-69cb7f38afbc-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:10 crc kubenswrapper[4735]: I0128 09:56:10.687488 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdzn8\" (UniqueName: \"kubernetes.io/projected/54d4636f-758a-41b9-9491-69cb7f38afbc-kube-api-access-rdzn8\") on node \"crc\" DevicePath \"\"" Jan 28 09:56:11 crc kubenswrapper[4735]: I0128 09:56:11.208166 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" 
event={"ID":"54d4636f-758a-41b9-9491-69cb7f38afbc","Type":"ContainerDied","Data":"0abf1f518c70d219ef3d8d35d2e9270680f9fb48bfb04b2cf18f8d97bfead1dc"} Jan 28 09:56:11 crc kubenswrapper[4735]: I0128 09:56:11.208205 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0abf1f518c70d219ef3d8d35d2e9270680f9fb48bfb04b2cf18f8d97bfead1dc" Jan 28 09:56:11 crc kubenswrapper[4735]: I0128 09:56:11.208245 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.572037 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7"] Jan 28 09:56:19 crc kubenswrapper[4735]: E0128 09:56:19.572727 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerName="extract" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.572739 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerName="extract" Jan 28 09:56:19 crc kubenswrapper[4735]: E0128 09:56:19.572767 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc4463e-9f59-4e22-b40d-ebf47cc84969" containerName="console" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.572773 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc4463e-9f59-4e22-b40d-ebf47cc84969" containerName="console" Jan 28 09:56:19 crc kubenswrapper[4735]: E0128 09:56:19.572807 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerName="pull" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.572813 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerName="pull" Jan 28 09:56:19 crc kubenswrapper[4735]: E0128 09:56:19.572822 4735 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerName="util" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.572828 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerName="util" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.572915 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc4463e-9f59-4e22-b40d-ebf47cc84969" containerName="console" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.572927 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d4636f-758a-41b9-9491-69cb7f38afbc" containerName="extract" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.573336 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.574998 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.575374 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.575545 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.576999 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zj6c4" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.577870 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.593176 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7"] Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.700979 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8h52\" (UniqueName: \"kubernetes.io/projected/8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad-kube-api-access-l8h52\") pod \"metallb-operator-controller-manager-54979c8865-g4cx7\" (UID: \"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad\") " pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.701044 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad-apiservice-cert\") pod \"metallb-operator-controller-manager-54979c8865-g4cx7\" (UID: \"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad\") " pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.701125 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad-webhook-cert\") pod \"metallb-operator-controller-manager-54979c8865-g4cx7\" (UID: \"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad\") " pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.802134 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad-apiservice-cert\") pod \"metallb-operator-controller-manager-54979c8865-g4cx7\" (UID: \"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad\") " pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.802214 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad-webhook-cert\") pod \"metallb-operator-controller-manager-54979c8865-g4cx7\" (UID: \"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad\") " pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.802252 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8h52\" (UniqueName: \"kubernetes.io/projected/8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad-kube-api-access-l8h52\") pod \"metallb-operator-controller-manager-54979c8865-g4cx7\" (UID: \"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad\") " pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.811221 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t"] Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.812124 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.812157 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad-apiservice-cert\") pod \"metallb-operator-controller-manager-54979c8865-g4cx7\" (UID: \"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad\") " pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.812198 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad-webhook-cert\") pod \"metallb-operator-controller-manager-54979c8865-g4cx7\" (UID: \"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad\") " pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.813873 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.814069 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-k2j7g" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.814248 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.831003 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t"] Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.835774 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8h52\" (UniqueName: \"kubernetes.io/projected/8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad-kube-api-access-l8h52\") pod 
\"metallb-operator-controller-manager-54979c8865-g4cx7\" (UID: \"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad\") " pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.889194 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.903774 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3da5666b-ceb3-488f-ac22-1f25ba7cea7f-webhook-cert\") pod \"metallb-operator-webhook-server-cb99f596-8vf2t\" (UID: \"3da5666b-ceb3-488f-ac22-1f25ba7cea7f\") " pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.903869 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3da5666b-ceb3-488f-ac22-1f25ba7cea7f-apiservice-cert\") pod \"metallb-operator-webhook-server-cb99f596-8vf2t\" (UID: \"3da5666b-ceb3-488f-ac22-1f25ba7cea7f\") " pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:19 crc kubenswrapper[4735]: I0128 09:56:19.903943 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb2gl\" (UniqueName: \"kubernetes.io/projected/3da5666b-ceb3-488f-ac22-1f25ba7cea7f-kube-api-access-tb2gl\") pod \"metallb-operator-webhook-server-cb99f596-8vf2t\" (UID: \"3da5666b-ceb3-488f-ac22-1f25ba7cea7f\") " pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 09:56:20.005259 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb2gl\" (UniqueName: 
\"kubernetes.io/projected/3da5666b-ceb3-488f-ac22-1f25ba7cea7f-kube-api-access-tb2gl\") pod \"metallb-operator-webhook-server-cb99f596-8vf2t\" (UID: \"3da5666b-ceb3-488f-ac22-1f25ba7cea7f\") " pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 09:56:20.005325 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3da5666b-ceb3-488f-ac22-1f25ba7cea7f-webhook-cert\") pod \"metallb-operator-webhook-server-cb99f596-8vf2t\" (UID: \"3da5666b-ceb3-488f-ac22-1f25ba7cea7f\") " pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 09:56:20.005370 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3da5666b-ceb3-488f-ac22-1f25ba7cea7f-apiservice-cert\") pod \"metallb-operator-webhook-server-cb99f596-8vf2t\" (UID: \"3da5666b-ceb3-488f-ac22-1f25ba7cea7f\") " pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 09:56:20.013590 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3da5666b-ceb3-488f-ac22-1f25ba7cea7f-apiservice-cert\") pod \"metallb-operator-webhook-server-cb99f596-8vf2t\" (UID: \"3da5666b-ceb3-488f-ac22-1f25ba7cea7f\") " pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 09:56:20.013608 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3da5666b-ceb3-488f-ac22-1f25ba7cea7f-webhook-cert\") pod \"metallb-operator-webhook-server-cb99f596-8vf2t\" (UID: \"3da5666b-ceb3-488f-ac22-1f25ba7cea7f\") " pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 
09:56:20.044958 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb2gl\" (UniqueName: \"kubernetes.io/projected/3da5666b-ceb3-488f-ac22-1f25ba7cea7f-kube-api-access-tb2gl\") pod \"metallb-operator-webhook-server-cb99f596-8vf2t\" (UID: \"3da5666b-ceb3-488f-ac22-1f25ba7cea7f\") " pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 09:56:20.154384 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7"] Jan 28 09:56:20 crc kubenswrapper[4735]: W0128 09:56:20.162526 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d49adc6_130a_4b8b_8c7b_ab35d0fc5bad.slice/crio-89ff603fd16907d44505f94c61de80f8937d6267e3af8346ce9e70404f442e09 WatchSource:0}: Error finding container 89ff603fd16907d44505f94c61de80f8937d6267e3af8346ce9e70404f442e09: Status 404 returned error can't find the container with id 89ff603fd16907d44505f94c61de80f8937d6267e3af8346ce9e70404f442e09 Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 09:56:20.184544 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 09:56:20.266328 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" event={"ID":"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad","Type":"ContainerStarted","Data":"89ff603fd16907d44505f94c61de80f8937d6267e3af8346ce9e70404f442e09"} Jan 28 09:56:20 crc kubenswrapper[4735]: I0128 09:56:20.610530 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t"] Jan 28 09:56:20 crc kubenswrapper[4735]: W0128 09:56:20.620072 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3da5666b_ceb3_488f_ac22_1f25ba7cea7f.slice/crio-8b78838da7d33571977574b4ecfe14bf83f5205b97810fda422f53ad48e9ff94 WatchSource:0}: Error finding container 8b78838da7d33571977574b4ecfe14bf83f5205b97810fda422f53ad48e9ff94: Status 404 returned error can't find the container with id 8b78838da7d33571977574b4ecfe14bf83f5205b97810fda422f53ad48e9ff94 Jan 28 09:56:21 crc kubenswrapper[4735]: I0128 09:56:21.271675 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" event={"ID":"3da5666b-ceb3-488f-ac22-1f25ba7cea7f","Type":"ContainerStarted","Data":"8b78838da7d33571977574b4ecfe14bf83f5205b97810fda422f53ad48e9ff94"} Jan 28 09:56:24 crc kubenswrapper[4735]: I0128 09:56:24.294699 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" event={"ID":"8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad","Type":"ContainerStarted","Data":"64a7a28c6e00635e7c5722459057b0d5f5837a4ca603a0adadc1d10ca8cd4473"} Jan 28 09:56:24 crc kubenswrapper[4735]: I0128 09:56:24.296251 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:56:24 crc kubenswrapper[4735]: I0128 09:56:24.315134 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" podStartSLOduration=2.339407474 podStartE2EDuration="5.315116558s" podCreationTimestamp="2026-01-28 09:56:19 +0000 UTC" firstStartedPulling="2026-01-28 09:56:20.164674899 +0000 UTC m=+833.314339272" lastFinishedPulling="2026-01-28 09:56:23.140383983 +0000 UTC m=+836.290048356" observedRunningTime="2026-01-28 09:56:24.312952795 +0000 UTC m=+837.462617168" watchObservedRunningTime="2026-01-28 09:56:24.315116558 +0000 UTC m=+837.464780931" Jan 28 09:56:26 crc kubenswrapper[4735]: I0128 09:56:26.323386 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" event={"ID":"3da5666b-ceb3-488f-ac22-1f25ba7cea7f","Type":"ContainerStarted","Data":"5c281aa93b611e2bec44f77571eea224f0f172d0fd4e37253f03c6e45f699f08"} Jan 28 09:56:26 crc kubenswrapper[4735]: I0128 09:56:26.323833 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:26 crc kubenswrapper[4735]: I0128 09:56:26.342908 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" podStartSLOduration=1.8403040069999999 podStartE2EDuration="7.342888962s" podCreationTimestamp="2026-01-28 09:56:19 +0000 UTC" firstStartedPulling="2026-01-28 09:56:20.623905769 +0000 UTC m=+833.773570152" lastFinishedPulling="2026-01-28 09:56:26.126490734 +0000 UTC m=+839.276155107" observedRunningTime="2026-01-28 09:56:26.340979224 +0000 UTC m=+839.490643597" watchObservedRunningTime="2026-01-28 09:56:26.342888962 +0000 UTC m=+839.492553335" Jan 28 09:56:40 crc kubenswrapper[4735]: I0128 09:56:40.190661 4735 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-cb99f596-8vf2t" Jan 28 09:56:59 crc kubenswrapper[4735]: I0128 09:56:59.893099 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-54979c8865-g4cx7" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.721457 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-g8zjj"] Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.724675 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.729047 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.729064 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.729118 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nc5r7" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.734087 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849"] Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.735379 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.738528 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.746220 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849"] Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.768125 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-frr-sockets\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.768170 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/40f6778a-2f24-4426-9292-5a394fe09d3d-frr-startup\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.768189 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-reloader\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.768204 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-metrics\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.768225 
4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40f6778a-2f24-4426-9292-5a394fe09d3d-metrics-certs\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.768250 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsp8\" (UniqueName: \"kubernetes.io/projected/0042167e-002b-4fa7-8ff3-161e1326fae1-kube-api-access-dgsp8\") pod \"frr-k8s-webhook-server-7df86c4f6c-qk849\" (UID: \"0042167e-002b-4fa7-8ff3-161e1326fae1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.768270 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-frr-conf\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.768294 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2kr\" (UniqueName: \"kubernetes.io/projected/40f6778a-2f24-4426-9292-5a394fe09d3d-kube-api-access-gn2kr\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.768315 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0042167e-002b-4fa7-8ff3-161e1326fae1-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qk849\" (UID: \"0042167e-002b-4fa7-8ff3-161e1326fae1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 
09:57:00.840622 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-lsbrt"] Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.841618 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.844058 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.859261 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wkrfv"] Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.860210 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wkrfv" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.862703 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8d48j" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.862959 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.862987 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.863071 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.871698 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-frr-sockets\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.871760 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"frr-startup\" (UniqueName: \"kubernetes.io/configmap/40f6778a-2f24-4426-9292-5a394fe09d3d-frr-startup\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.871785 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-reloader\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.871805 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-metrics\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.871833 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40f6778a-2f24-4426-9292-5a394fe09d3d-metrics-certs\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.871864 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsp8\" (UniqueName: \"kubernetes.io/projected/0042167e-002b-4fa7-8ff3-161e1326fae1-kube-api-access-dgsp8\") pod \"frr-k8s-webhook-server-7df86c4f6c-qk849\" (UID: \"0042167e-002b-4fa7-8ff3-161e1326fae1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.871889 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-frr-conf\") pod \"frr-k8s-g8zjj\" (UID: 
\"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.871916 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2kr\" (UniqueName: \"kubernetes.io/projected/40f6778a-2f24-4426-9292-5a394fe09d3d-kube-api-access-gn2kr\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.871944 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0042167e-002b-4fa7-8ff3-161e1326fae1-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qk849\" (UID: \"0042167e-002b-4fa7-8ff3-161e1326fae1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.872828 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lsbrt"] Jan 28 09:57:00 crc kubenswrapper[4735]: E0128 09:57:00.873100 4735 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 28 09:57:00 crc kubenswrapper[4735]: E0128 09:57:00.873138 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40f6778a-2f24-4426-9292-5a394fe09d3d-metrics-certs podName:40f6778a-2f24-4426-9292-5a394fe09d3d nodeName:}" failed. No retries permitted until 2026-01-28 09:57:01.373123259 +0000 UTC m=+874.522787632 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40f6778a-2f24-4426-9292-5a394fe09d3d-metrics-certs") pod "frr-k8s-g8zjj" (UID: "40f6778a-2f24-4426-9292-5a394fe09d3d") : secret "frr-k8s-certs-secret" not found Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.873341 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-metrics\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.873771 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-reloader\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.873969 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/40f6778a-2f24-4426-9292-5a394fe09d3d-frr-startup\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.874540 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-frr-conf\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.874644 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/40f6778a-2f24-4426-9292-5a394fe09d3d-frr-sockets\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 
09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.878643 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0042167e-002b-4fa7-8ff3-161e1326fae1-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qk849\" (UID: \"0042167e-002b-4fa7-8ff3-161e1326fae1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.908781 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsp8\" (UniqueName: \"kubernetes.io/projected/0042167e-002b-4fa7-8ff3-161e1326fae1-kube-api-access-dgsp8\") pod \"frr-k8s-webhook-server-7df86c4f6c-qk849\" (UID: \"0042167e-002b-4fa7-8ff3-161e1326fae1\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.921789 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2kr\" (UniqueName: \"kubernetes.io/projected/40f6778a-2f24-4426-9292-5a394fe09d3d-kube-api-access-gn2kr\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.973134 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-memberlist\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.973184 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbnrk\" (UniqueName: \"kubernetes.io/projected/4cf87a70-49c9-4cb3-a2be-afd200ca892b-kube-api-access-cbnrk\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:00 crc 
kubenswrapper[4735]: I0128 09:57:00.973208 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-metrics-certs\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.973223 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4h7w\" (UniqueName: \"kubernetes.io/projected/f582a362-710c-4f62-988c-e09f4f34da10-kube-api-access-j4h7w\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.973278 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cf87a70-49c9-4cb3-a2be-afd200ca892b-cert\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.973315 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f582a362-710c-4f62-988c-e09f4f34da10-metallb-excludel2\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:00 crc kubenswrapper[4735]: I0128 09:57:00.973341 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cf87a70-49c9-4cb3-a2be-afd200ca892b-metrics-certs\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 
09:57:01.055180 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.074766 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-memberlist\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.074817 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbnrk\" (UniqueName: \"kubernetes.io/projected/4cf87a70-49c9-4cb3-a2be-afd200ca892b-kube-api-access-cbnrk\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.074842 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-metrics-certs\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.074875 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4h7w\" (UniqueName: \"kubernetes.io/projected/f582a362-710c-4f62-988c-e09f4f34da10-kube-api-access-j4h7w\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.074895 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cf87a70-49c9-4cb3-a2be-afd200ca892b-cert\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " 
pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.075430 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f582a362-710c-4f62-988c-e09f4f34da10-metallb-excludel2\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.075465 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cf87a70-49c9-4cb3-a2be-afd200ca892b-metrics-certs\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:01 crc kubenswrapper[4735]: E0128 09:57:01.074934 4735 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 09:57:01 crc kubenswrapper[4735]: E0128 09:57:01.075575 4735 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 28 09:57:01 crc kubenswrapper[4735]: E0128 09:57:01.075580 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-memberlist podName:f582a362-710c-4f62-988c-e09f4f34da10 nodeName:}" failed. No retries permitted until 2026-01-28 09:57:01.57556147 +0000 UTC m=+874.725225843 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-memberlist") pod "speaker-wkrfv" (UID: "f582a362-710c-4f62-988c-e09f4f34da10") : secret "metallb-memberlist" not found Jan 28 09:57:01 crc kubenswrapper[4735]: E0128 09:57:01.075603 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cf87a70-49c9-4cb3-a2be-afd200ca892b-metrics-certs podName:4cf87a70-49c9-4cb3-a2be-afd200ca892b nodeName:}" failed. No retries permitted until 2026-01-28 09:57:01.575594491 +0000 UTC m=+874.725258864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4cf87a70-49c9-4cb3-a2be-afd200ca892b-metrics-certs") pod "controller-6968d8fdc4-lsbrt" (UID: "4cf87a70-49c9-4cb3-a2be-afd200ca892b") : secret "controller-certs-secret" not found Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.076222 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f582a362-710c-4f62-988c-e09f4f34da10-metallb-excludel2\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.080683 4735 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.081653 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-metrics-certs\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.105581 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbnrk\" (UniqueName: 
\"kubernetes.io/projected/4cf87a70-49c9-4cb3-a2be-afd200ca892b-kube-api-access-cbnrk\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.107157 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cf87a70-49c9-4cb3-a2be-afd200ca892b-cert\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.113378 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4h7w\" (UniqueName: \"kubernetes.io/projected/f582a362-710c-4f62-988c-e09f4f34da10-kube-api-access-j4h7w\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.379450 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40f6778a-2f24-4426-9292-5a394fe09d3d-metrics-certs\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.383500 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40f6778a-2f24-4426-9292-5a394fe09d3d-metrics-certs\") pod \"frr-k8s-g8zjj\" (UID: \"40f6778a-2f24-4426-9292-5a394fe09d3d\") " pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.537265 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849"] Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.582408 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-memberlist\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.582510 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cf87a70-49c9-4cb3-a2be-afd200ca892b-metrics-certs\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:01 crc kubenswrapper[4735]: E0128 09:57:01.582512 4735 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 09:57:01 crc kubenswrapper[4735]: E0128 09:57:01.582593 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-memberlist podName:f582a362-710c-4f62-988c-e09f4f34da10 nodeName:}" failed. No retries permitted until 2026-01-28 09:57:02.582572677 +0000 UTC m=+875.732237110 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-memberlist") pod "speaker-wkrfv" (UID: "f582a362-710c-4f62-988c-e09f4f34da10") : secret "metallb-memberlist" not found Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.586192 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4cf87a70-49c9-4cb3-a2be-afd200ca892b-metrics-certs\") pod \"controller-6968d8fdc4-lsbrt\" (UID: \"4cf87a70-49c9-4cb3-a2be-afd200ca892b\") " pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.644825 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.756085 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:01 crc kubenswrapper[4735]: I0128 09:57:01.981994 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lsbrt"] Jan 28 09:57:01 crc kubenswrapper[4735]: W0128 09:57:01.990162 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cf87a70_49c9_4cb3_a2be_afd200ca892b.slice/crio-26309e6892e2664a5f4d201fd495532140d468840e03257d684a5a6de2b1d84e WatchSource:0}: Error finding container 26309e6892e2664a5f4d201fd495532140d468840e03257d684a5a6de2b1d84e: Status 404 returned error can't find the container with id 26309e6892e2664a5f4d201fd495532140d468840e03257d684a5a6de2b1d84e Jan 28 09:57:02 crc kubenswrapper[4735]: I0128 09:57:02.541042 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerStarted","Data":"18c109c3727fb1342d814c8772727d72b331d93d91194ab226b5612409c53c44"} Jan 28 09:57:02 crc kubenswrapper[4735]: I0128 09:57:02.543025 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lsbrt" event={"ID":"4cf87a70-49c9-4cb3-a2be-afd200ca892b","Type":"ContainerStarted","Data":"6090f78e4409f0a6c612e0e5343ac842b3d630b06372c5bf0735d49416eb9a87"} Jan 28 09:57:02 crc kubenswrapper[4735]: I0128 09:57:02.543080 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lsbrt" event={"ID":"4cf87a70-49c9-4cb3-a2be-afd200ca892b","Type":"ContainerStarted","Data":"26309e6892e2664a5f4d201fd495532140d468840e03257d684a5a6de2b1d84e"} Jan 28 09:57:02 crc kubenswrapper[4735]: I0128 09:57:02.544060 4735 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" event={"ID":"0042167e-002b-4fa7-8ff3-161e1326fae1","Type":"ContainerStarted","Data":"edf6df98badbbe7220239fa7a536ff2ecef30ebf7f7bd5872d48eeaea36accd5"} Jan 28 09:57:02 crc kubenswrapper[4735]: I0128 09:57:02.594665 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-memberlist\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:02 crc kubenswrapper[4735]: I0128 09:57:02.599940 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f582a362-710c-4f62-988c-e09f4f34da10-memberlist\") pod \"speaker-wkrfv\" (UID: \"f582a362-710c-4f62-988c-e09f4f34da10\") " pod="metallb-system/speaker-wkrfv" Jan 28 09:57:02 crc kubenswrapper[4735]: I0128 09:57:02.707905 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wkrfv" Jan 28 09:57:02 crc kubenswrapper[4735]: W0128 09:57:02.724721 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf582a362_710c_4f62_988c_e09f4f34da10.slice/crio-74cca5a9e51be36b487254c2a65e3a9390283837f61f92b745c72ad6aee73a12 WatchSource:0}: Error finding container 74cca5a9e51be36b487254c2a65e3a9390283837f61f92b745c72ad6aee73a12: Status 404 returned error can't find the container with id 74cca5a9e51be36b487254c2a65e3a9390283837f61f92b745c72ad6aee73a12 Jan 28 09:57:03 crc kubenswrapper[4735]: I0128 09:57:03.555350 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lsbrt" event={"ID":"4cf87a70-49c9-4cb3-a2be-afd200ca892b","Type":"ContainerStarted","Data":"7ea9bf715838ad77253c0bb02254d286e817097528f0f3f09be6049771137053"} Jan 28 09:57:03 crc kubenswrapper[4735]: I0128 09:57:03.555739 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:03 crc kubenswrapper[4735]: I0128 09:57:03.561679 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wkrfv" event={"ID":"f582a362-710c-4f62-988c-e09f4f34da10","Type":"ContainerStarted","Data":"41170ff739e3f6510bac2434d0ea7d2f02f81d933617c8d05972be038aefc289"} Jan 28 09:57:03 crc kubenswrapper[4735]: I0128 09:57:03.561731 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wkrfv" event={"ID":"f582a362-710c-4f62-988c-e09f4f34da10","Type":"ContainerStarted","Data":"f0d5fd73c0df290aa5fbef95c30f4f66f1729492060064780a4f2306b07e45ee"} Jan 28 09:57:03 crc kubenswrapper[4735]: I0128 09:57:03.561799 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wkrfv" 
event={"ID":"f582a362-710c-4f62-988c-e09f4f34da10","Type":"ContainerStarted","Data":"74cca5a9e51be36b487254c2a65e3a9390283837f61f92b745c72ad6aee73a12"} Jan 28 09:57:03 crc kubenswrapper[4735]: I0128 09:57:03.561972 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wkrfv" Jan 28 09:57:03 crc kubenswrapper[4735]: I0128 09:57:03.588599 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-lsbrt" podStartSLOduration=3.588578874 podStartE2EDuration="3.588578874s" podCreationTimestamp="2026-01-28 09:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:57:03.586239655 +0000 UTC m=+876.735904028" watchObservedRunningTime="2026-01-28 09:57:03.588578874 +0000 UTC m=+876.738243247" Jan 28 09:57:03 crc kubenswrapper[4735]: I0128 09:57:03.612093 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wkrfv" podStartSLOduration=3.612071081 podStartE2EDuration="3.612071081s" podCreationTimestamp="2026-01-28 09:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:57:03.608986884 +0000 UTC m=+876.758651277" watchObservedRunningTime="2026-01-28 09:57:03.612071081 +0000 UTC m=+876.761735454" Jan 28 09:57:09 crc kubenswrapper[4735]: I0128 09:57:09.604402 4735 generic.go:334] "Generic (PLEG): container finished" podID="40f6778a-2f24-4426-9292-5a394fe09d3d" containerID="83043ce859f4e4be8668f57cbb788500e8aab68916ebcf72eb6c2fdaafd78601" exitCode=0 Jan 28 09:57:09 crc kubenswrapper[4735]: I0128 09:57:09.604510 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" 
event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerDied","Data":"83043ce859f4e4be8668f57cbb788500e8aab68916ebcf72eb6c2fdaafd78601"} Jan 28 09:57:09 crc kubenswrapper[4735]: I0128 09:57:09.606389 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" event={"ID":"0042167e-002b-4fa7-8ff3-161e1326fae1","Type":"ContainerStarted","Data":"65ba1f38e6dba51f68dc9c53314be95330a2fa396cd20575a537876158e49b9b"} Jan 28 09:57:09 crc kubenswrapper[4735]: I0128 09:57:09.606714 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:09 crc kubenswrapper[4735]: I0128 09:57:09.639837 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" podStartSLOduration=1.739346926 podStartE2EDuration="9.639817754s" podCreationTimestamp="2026-01-28 09:57:00 +0000 UTC" firstStartedPulling="2026-01-28 09:57:01.540627698 +0000 UTC m=+874.690292081" lastFinishedPulling="2026-01-28 09:57:09.441098526 +0000 UTC m=+882.590762909" observedRunningTime="2026-01-28 09:57:09.639662661 +0000 UTC m=+882.789327034" watchObservedRunningTime="2026-01-28 09:57:09.639817754 +0000 UTC m=+882.789482127" Jan 28 09:57:10 crc kubenswrapper[4735]: I0128 09:57:10.617607 4735 generic.go:334] "Generic (PLEG): container finished" podID="40f6778a-2f24-4426-9292-5a394fe09d3d" containerID="eee7b589cb2ade25ae5965f610fe7cf8657e05e0b30e5eea568809a095dbf452" exitCode=0 Jan 28 09:57:10 crc kubenswrapper[4735]: I0128 09:57:10.617698 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerDied","Data":"eee7b589cb2ade25ae5965f610fe7cf8657e05e0b30e5eea568809a095dbf452"} Jan 28 09:57:11 crc kubenswrapper[4735]: I0128 09:57:11.623696 4735 generic.go:334] "Generic (PLEG): container finished" 
podID="40f6778a-2f24-4426-9292-5a394fe09d3d" containerID="749eece06047cb37f65c60ad91dbf9f0d1094bc75961ade779bdb1e0b55c406e" exitCode=0 Jan 28 09:57:11 crc kubenswrapper[4735]: I0128 09:57:11.623789 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerDied","Data":"749eece06047cb37f65c60ad91dbf9f0d1094bc75961ade779bdb1e0b55c406e"} Jan 28 09:57:12 crc kubenswrapper[4735]: I0128 09:57:12.632356 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerStarted","Data":"361d77e4bad0d6fa5927a921e71e5816c93f7bcd89b8bbd35e3c80db45868587"} Jan 28 09:57:12 crc kubenswrapper[4735]: I0128 09:57:12.632696 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerStarted","Data":"529b97c62501da016eb9410643853d0f2a79ae9bb63a252e7a30f4bc616838a9"} Jan 28 09:57:12 crc kubenswrapper[4735]: I0128 09:57:12.632715 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:12 crc kubenswrapper[4735]: I0128 09:57:12.632728 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerStarted","Data":"fd5956070b7a3b3dac0d067266cd392b21354907b3bd5ed1b43bbd0d3afdff4a"} Jan 28 09:57:12 crc kubenswrapper[4735]: I0128 09:57:12.632738 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerStarted","Data":"08c8ed7e8ec6b7c5dda906002f6dc4920ad676164e5b735be6e25b627550dab5"} Jan 28 09:57:12 crc kubenswrapper[4735]: I0128 09:57:12.632765 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" 
event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerStarted","Data":"77adbc211704a5b9a75fa767b7c7d89cd039e9a75b310378f392bb00fbb066e4"} Jan 28 09:57:12 crc kubenswrapper[4735]: I0128 09:57:12.632776 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g8zjj" event={"ID":"40f6778a-2f24-4426-9292-5a394fe09d3d","Type":"ContainerStarted","Data":"f89390463cf2a07b205f54959468a32e14980683fde0f18d1feba553b283b34e"} Jan 28 09:57:12 crc kubenswrapper[4735]: I0128 09:57:12.653420 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-g8zjj" podStartSLOduration=5.10228981 podStartE2EDuration="12.653404162s" podCreationTimestamp="2026-01-28 09:57:00 +0000 UTC" firstStartedPulling="2026-01-28 09:57:01.904294501 +0000 UTC m=+875.053958874" lastFinishedPulling="2026-01-28 09:57:09.455408843 +0000 UTC m=+882.605073226" observedRunningTime="2026-01-28 09:57:12.64972387 +0000 UTC m=+885.799388263" watchObservedRunningTime="2026-01-28 09:57:12.653404162 +0000 UTC m=+885.803068535" Jan 28 09:57:12 crc kubenswrapper[4735]: I0128 09:57:12.711796 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wkrfv" Jan 28 09:57:15 crc kubenswrapper[4735]: I0128 09:57:15.879455 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-b5llz"] Jan 28 09:57:15 crc kubenswrapper[4735]: I0128 09:57:15.880725 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b5llz" Jan 28 09:57:15 crc kubenswrapper[4735]: I0128 09:57:15.882772 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 09:57:15 crc kubenswrapper[4735]: I0128 09:57:15.883024 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 09:57:15 crc kubenswrapper[4735]: I0128 09:57:15.884627 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-74pjh" Jan 28 09:57:15 crc kubenswrapper[4735]: I0128 09:57:15.888264 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b5llz"] Jan 28 09:57:16 crc kubenswrapper[4735]: I0128 09:57:16.013136 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdxx\" (UniqueName: \"kubernetes.io/projected/9905518f-456d-45d5-bb3f-8bbc12bc4d59-kube-api-access-lzdxx\") pod \"openstack-operator-index-b5llz\" (UID: \"9905518f-456d-45d5-bb3f-8bbc12bc4d59\") " pod="openstack-operators/openstack-operator-index-b5llz" Jan 28 09:57:16 crc kubenswrapper[4735]: I0128 09:57:16.114514 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzdxx\" (UniqueName: \"kubernetes.io/projected/9905518f-456d-45d5-bb3f-8bbc12bc4d59-kube-api-access-lzdxx\") pod \"openstack-operator-index-b5llz\" (UID: \"9905518f-456d-45d5-bb3f-8bbc12bc4d59\") " pod="openstack-operators/openstack-operator-index-b5llz" Jan 28 09:57:16 crc kubenswrapper[4735]: I0128 09:57:16.132467 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzdxx\" (UniqueName: \"kubernetes.io/projected/9905518f-456d-45d5-bb3f-8bbc12bc4d59-kube-api-access-lzdxx\") pod \"openstack-operator-index-b5llz\" (UID: 
\"9905518f-456d-45d5-bb3f-8bbc12bc4d59\") " pod="openstack-operators/openstack-operator-index-b5llz" Jan 28 09:57:16 crc kubenswrapper[4735]: I0128 09:57:16.204028 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b5llz" Jan 28 09:57:16 crc kubenswrapper[4735]: I0128 09:57:16.645816 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:16 crc kubenswrapper[4735]: I0128 09:57:16.692341 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:16 crc kubenswrapper[4735]: I0128 09:57:16.715841 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b5llz"] Jan 28 09:57:17 crc kubenswrapper[4735]: I0128 09:57:17.668945 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b5llz" event={"ID":"9905518f-456d-45d5-bb3f-8bbc12bc4d59","Type":"ContainerStarted","Data":"5899e60053e28badf4d0d091612ec03d76d143cca95a671cc0f660f82d9389a5"} Jan 28 09:57:19 crc kubenswrapper[4735]: I0128 09:57:19.248281 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b5llz"] Jan 28 09:57:19 crc kubenswrapper[4735]: I0128 09:57:19.853791 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nmkt5"] Jan 28 09:57:19 crc kubenswrapper[4735]: I0128 09:57:19.855019 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nmkt5" Jan 28 09:57:19 crc kubenswrapper[4735]: I0128 09:57:19.868901 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nmkt5"] Jan 28 09:57:19 crc kubenswrapper[4735]: I0128 09:57:19.962899 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kksrl\" (UniqueName: \"kubernetes.io/projected/f6eacb6b-9066-44c5-a483-02274a4e3884-kube-api-access-kksrl\") pod \"openstack-operator-index-nmkt5\" (UID: \"f6eacb6b-9066-44c5-a483-02274a4e3884\") " pod="openstack-operators/openstack-operator-index-nmkt5" Jan 28 09:57:20 crc kubenswrapper[4735]: I0128 09:57:20.064505 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kksrl\" (UniqueName: \"kubernetes.io/projected/f6eacb6b-9066-44c5-a483-02274a4e3884-kube-api-access-kksrl\") pod \"openstack-operator-index-nmkt5\" (UID: \"f6eacb6b-9066-44c5-a483-02274a4e3884\") " pod="openstack-operators/openstack-operator-index-nmkt5" Jan 28 09:57:20 crc kubenswrapper[4735]: I0128 09:57:20.082737 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kksrl\" (UniqueName: \"kubernetes.io/projected/f6eacb6b-9066-44c5-a483-02274a4e3884-kube-api-access-kksrl\") pod \"openstack-operator-index-nmkt5\" (UID: \"f6eacb6b-9066-44c5-a483-02274a4e3884\") " pod="openstack-operators/openstack-operator-index-nmkt5" Jan 28 09:57:20 crc kubenswrapper[4735]: I0128 09:57:20.183870 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nmkt5" Jan 28 09:57:20 crc kubenswrapper[4735]: I0128 09:57:20.562060 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nmkt5"] Jan 28 09:57:20 crc kubenswrapper[4735]: I0128 09:57:20.687392 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nmkt5" event={"ID":"f6eacb6b-9066-44c5-a483-02274a4e3884","Type":"ContainerStarted","Data":"d49a48e4bf336436ab9a7f7ae247e9849e4004daf647e505b55d90c8dfd89261"} Jan 28 09:57:20 crc kubenswrapper[4735]: I0128 09:57:20.688732 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b5llz" event={"ID":"9905518f-456d-45d5-bb3f-8bbc12bc4d59","Type":"ContainerStarted","Data":"8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014"} Jan 28 09:57:20 crc kubenswrapper[4735]: I0128 09:57:20.688848 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-b5llz" podUID="9905518f-456d-45d5-bb3f-8bbc12bc4d59" containerName="registry-server" containerID="cri-o://8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014" gracePeriod=2 Jan 28 09:57:20 crc kubenswrapper[4735]: I0128 09:57:20.706849 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-b5llz" podStartSLOduration=2.542313241 podStartE2EDuration="5.706809923s" podCreationTimestamp="2026-01-28 09:57:15 +0000 UTC" firstStartedPulling="2026-01-28 09:57:16.732053051 +0000 UTC m=+889.881717424" lastFinishedPulling="2026-01-28 09:57:19.896549733 +0000 UTC m=+893.046214106" observedRunningTime="2026-01-28 09:57:20.703154412 +0000 UTC m=+893.852818775" watchObservedRunningTime="2026-01-28 09:57:20.706809923 +0000 UTC m=+893.856474286" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.064682 4735 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qk849" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.084879 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b5llz" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.179106 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzdxx\" (UniqueName: \"kubernetes.io/projected/9905518f-456d-45d5-bb3f-8bbc12bc4d59-kube-api-access-lzdxx\") pod \"9905518f-456d-45d5-bb3f-8bbc12bc4d59\" (UID: \"9905518f-456d-45d5-bb3f-8bbc12bc4d59\") " Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.188899 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9905518f-456d-45d5-bb3f-8bbc12bc4d59-kube-api-access-lzdxx" (OuterVolumeSpecName: "kube-api-access-lzdxx") pod "9905518f-456d-45d5-bb3f-8bbc12bc4d59" (UID: "9905518f-456d-45d5-bb3f-8bbc12bc4d59"). InnerVolumeSpecName "kube-api-access-lzdxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.280586 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzdxx\" (UniqueName: \"kubernetes.io/projected/9905518f-456d-45d5-bb3f-8bbc12bc4d59-kube-api-access-lzdxx\") on node \"crc\" DevicePath \"\"" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.649692 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-g8zjj" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.697520 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nmkt5" event={"ID":"f6eacb6b-9066-44c5-a483-02274a4e3884","Type":"ContainerStarted","Data":"44688b27704e1f0ff307e727827bf492a25f22f73873036d903754417f10ae55"} Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.699505 4735 generic.go:334] "Generic (PLEG): container finished" podID="9905518f-456d-45d5-bb3f-8bbc12bc4d59" containerID="8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014" exitCode=0 Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.699527 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b5llz" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.699536 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b5llz" event={"ID":"9905518f-456d-45d5-bb3f-8bbc12bc4d59","Type":"ContainerDied","Data":"8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014"} Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.699909 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b5llz" event={"ID":"9905518f-456d-45d5-bb3f-8bbc12bc4d59","Type":"ContainerDied","Data":"5899e60053e28badf4d0d091612ec03d76d143cca95a671cc0f660f82d9389a5"} Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.699954 4735 scope.go:117] "RemoveContainer" containerID="8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.719251 4735 scope.go:117] "RemoveContainer" containerID="8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014" Jan 28 09:57:21 crc kubenswrapper[4735]: E0128 09:57:21.719563 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014\": container with ID starting with 8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014 not found: ID does not exist" containerID="8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.719595 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014"} err="failed to get container status \"8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014\": rpc error: code = NotFound desc = could not find container 
\"8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014\": container with ID starting with 8ff308d5611811cafb4cc57540c9d7373e811e1fd5b259647db42e434ef8c014 not found: ID does not exist" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.727346 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nmkt5" podStartSLOduration=2.675592697 podStartE2EDuration="2.72733321s" podCreationTimestamp="2026-01-28 09:57:19 +0000 UTC" firstStartedPulling="2026-01-28 09:57:20.577193723 +0000 UTC m=+893.726858096" lastFinishedPulling="2026-01-28 09:57:20.628934216 +0000 UTC m=+893.778598609" observedRunningTime="2026-01-28 09:57:21.717109454 +0000 UTC m=+894.866773827" watchObservedRunningTime="2026-01-28 09:57:21.72733321 +0000 UTC m=+894.876997583" Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.729590 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b5llz"] Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.733151 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-b5llz"] Jan 28 09:57:21 crc kubenswrapper[4735]: I0128 09:57:21.760306 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-lsbrt" Jan 28 09:57:23 crc kubenswrapper[4735]: I0128 09:57:23.521656 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9905518f-456d-45d5-bb3f-8bbc12bc4d59" path="/var/lib/kubelet/pods/9905518f-456d-45d5-bb3f-8bbc12bc4d59/volumes" Jan 28 09:57:30 crc kubenswrapper[4735]: I0128 09:57:30.184901 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nmkt5" Jan 28 09:57:30 crc kubenswrapper[4735]: I0128 09:57:30.185497 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-nmkt5" Jan 28 09:57:30 crc kubenswrapper[4735]: I0128 09:57:30.220645 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nmkt5" Jan 28 09:57:30 crc kubenswrapper[4735]: I0128 09:57:30.775543 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nmkt5" Jan 28 09:57:31 crc kubenswrapper[4735]: I0128 09:57:31.894150 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf"] Jan 28 09:57:31 crc kubenswrapper[4735]: E0128 09:57:31.894786 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9905518f-456d-45d5-bb3f-8bbc12bc4d59" containerName="registry-server" Jan 28 09:57:31 crc kubenswrapper[4735]: I0128 09:57:31.894803 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9905518f-456d-45d5-bb3f-8bbc12bc4d59" containerName="registry-server" Jan 28 09:57:31 crc kubenswrapper[4735]: I0128 09:57:31.894975 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9905518f-456d-45d5-bb3f-8bbc12bc4d59" containerName="registry-server" Jan 28 09:57:31 crc kubenswrapper[4735]: I0128 09:57:31.896236 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:31 crc kubenswrapper[4735]: I0128 09:57:31.899119 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-54dr9" Jan 28 09:57:31 crc kubenswrapper[4735]: I0128 09:57:31.911662 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf"] Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.034870 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-util\") pod \"143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.035106 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-bundle\") pod \"143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.035390 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vf28\" (UniqueName: \"kubernetes.io/projected/5ec63835-f891-4098-a2b0-348948b4a3ec-kube-api-access-2vf28\") pod \"143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 
09:57:32.137294 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-bundle\") pod \"143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.137415 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vf28\" (UniqueName: \"kubernetes.io/projected/5ec63835-f891-4098-a2b0-348948b4a3ec-kube-api-access-2vf28\") pod \"143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.137482 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-util\") pod \"143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.138069 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-bundle\") pod \"143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.138112 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-util\") pod \"143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.155140 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vf28\" (UniqueName: \"kubernetes.io/projected/5ec63835-f891-4098-a2b0-348948b4a3ec-kube-api-access-2vf28\") pod \"143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.269253 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.715024 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf"] Jan 28 09:57:32 crc kubenswrapper[4735]: I0128 09:57:32.760965 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" event={"ID":"5ec63835-f891-4098-a2b0-348948b4a3ec","Type":"ContainerStarted","Data":"443ad910a5097044bf360fe8e15dcd81249d5d5df378dfc57f1384175dbd101b"} Jan 28 09:57:33 crc kubenswrapper[4735]: I0128 09:57:33.771834 4735 generic.go:334] "Generic (PLEG): container finished" podID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerID="8c7c037508bae9296a8ecc6378bf2cbbafa27b2635f1a3a8cfb1d8e7e5d07a12" exitCode=0 Jan 28 09:57:33 crc kubenswrapper[4735]: I0128 09:57:33.771912 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" event={"ID":"5ec63835-f891-4098-a2b0-348948b4a3ec","Type":"ContainerDied","Data":"8c7c037508bae9296a8ecc6378bf2cbbafa27b2635f1a3a8cfb1d8e7e5d07a12"} Jan 28 09:57:34 crc kubenswrapper[4735]: I0128 09:57:34.782520 4735 generic.go:334] "Generic (PLEG): container finished" podID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerID="83e7b8fc6b3601a54648116d479115bb320ee0a730c8d8a8658c7c3bc7254c46" exitCode=0 Jan 28 09:57:34 crc kubenswrapper[4735]: I0128 09:57:34.782655 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" event={"ID":"5ec63835-f891-4098-a2b0-348948b4a3ec","Type":"ContainerDied","Data":"83e7b8fc6b3601a54648116d479115bb320ee0a730c8d8a8658c7c3bc7254c46"} Jan 28 09:57:35 crc kubenswrapper[4735]: I0128 09:57:35.792870 4735 generic.go:334] "Generic (PLEG): container finished" podID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerID="3178ee8eb3b7b48ad3fb092bb7aa38962bc59fc371d9ab7ba46263054fe89701" exitCode=0 Jan 28 09:57:35 crc kubenswrapper[4735]: I0128 09:57:35.793006 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" event={"ID":"5ec63835-f891-4098-a2b0-348948b4a3ec","Type":"ContainerDied","Data":"3178ee8eb3b7b48ad3fb092bb7aa38962bc59fc371d9ab7ba46263054fe89701"} Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.016536 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.106604 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vf28\" (UniqueName: \"kubernetes.io/projected/5ec63835-f891-4098-a2b0-348948b4a3ec-kube-api-access-2vf28\") pod \"5ec63835-f891-4098-a2b0-348948b4a3ec\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.106714 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-bundle\") pod \"5ec63835-f891-4098-a2b0-348948b4a3ec\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.106740 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-util\") pod \"5ec63835-f891-4098-a2b0-348948b4a3ec\" (UID: \"5ec63835-f891-4098-a2b0-348948b4a3ec\") " Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.114970 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec63835-f891-4098-a2b0-348948b4a3ec-kube-api-access-2vf28" (OuterVolumeSpecName: "kube-api-access-2vf28") pod "5ec63835-f891-4098-a2b0-348948b4a3ec" (UID: "5ec63835-f891-4098-a2b0-348948b4a3ec"). InnerVolumeSpecName "kube-api-access-2vf28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.120186 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-bundle" (OuterVolumeSpecName: "bundle") pod "5ec63835-f891-4098-a2b0-348948b4a3ec" (UID: "5ec63835-f891-4098-a2b0-348948b4a3ec"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.120238 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-util" (OuterVolumeSpecName: "util") pod "5ec63835-f891-4098-a2b0-348948b4a3ec" (UID: "5ec63835-f891-4098-a2b0-348948b4a3ec"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.208348 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vf28\" (UniqueName: \"kubernetes.io/projected/5ec63835-f891-4098-a2b0-348948b4a3ec-kube-api-access-2vf28\") on node \"crc\" DevicePath \"\"" Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.208420 4735 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.208432 4735 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5ec63835-f891-4098-a2b0-348948b4a3ec-util\") on node \"crc\" DevicePath \"\"" Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.811206 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" event={"ID":"5ec63835-f891-4098-a2b0-348948b4a3ec","Type":"ContainerDied","Data":"443ad910a5097044bf360fe8e15dcd81249d5d5df378dfc57f1384175dbd101b"} Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.811249 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf" Jan 28 09:57:37 crc kubenswrapper[4735]: I0128 09:57:37.811275 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="443ad910a5097044bf360fe8e15dcd81249d5d5df378dfc57f1384175dbd101b" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.365465 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pnwx6"] Jan 28 09:57:38 crc kubenswrapper[4735]: E0128 09:57:38.365708 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerName="util" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.365729 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerName="util" Jan 28 09:57:38 crc kubenswrapper[4735]: E0128 09:57:38.365769 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerName="extract" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.365778 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerName="extract" Jan 28 09:57:38 crc kubenswrapper[4735]: E0128 09:57:38.365796 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerName="pull" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.365806 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerName="pull" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.365931 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec63835-f891-4098-a2b0-348948b4a3ec" containerName="extract" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.366718 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.385291 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnwx6"] Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.448898 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-utilities\") pod \"redhat-marketplace-pnwx6\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.449034 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-catalog-content\") pod \"redhat-marketplace-pnwx6\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.449076 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8856\" (UniqueName: \"kubernetes.io/projected/8dee427c-956c-419e-80c7-b52ec600dda8-kube-api-access-f8856\") pod \"redhat-marketplace-pnwx6\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.550422 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-utilities\") pod \"redhat-marketplace-pnwx6\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.550508 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-catalog-content\") pod \"redhat-marketplace-pnwx6\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.550549 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8856\" (UniqueName: \"kubernetes.io/projected/8dee427c-956c-419e-80c7-b52ec600dda8-kube-api-access-f8856\") pod \"redhat-marketplace-pnwx6\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.551043 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-utilities\") pod \"redhat-marketplace-pnwx6\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.551146 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-catalog-content\") pod \"redhat-marketplace-pnwx6\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.568821 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8856\" (UniqueName: \"kubernetes.io/projected/8dee427c-956c-419e-80c7-b52ec600dda8-kube-api-access-f8856\") pod \"redhat-marketplace-pnwx6\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:38 crc kubenswrapper[4735]: I0128 09:57:38.680793 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:39 crc kubenswrapper[4735]: I0128 09:57:39.105229 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnwx6"] Jan 28 09:57:39 crc kubenswrapper[4735]: W0128 09:57:39.112558 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dee427c_956c_419e_80c7_b52ec600dda8.slice/crio-406b1d9e2f3260284543ca5749b67759833c9f6e86997bad0cc9193c60a2db78 WatchSource:0}: Error finding container 406b1d9e2f3260284543ca5749b67759833c9f6e86997bad0cc9193c60a2db78: Status 404 returned error can't find the container with id 406b1d9e2f3260284543ca5749b67759833c9f6e86997bad0cc9193c60a2db78 Jan 28 09:57:39 crc kubenswrapper[4735]: I0128 09:57:39.838533 4735 generic.go:334] "Generic (PLEG): container finished" podID="8dee427c-956c-419e-80c7-b52ec600dda8" containerID="90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178" exitCode=0 Jan 28 09:57:39 crc kubenswrapper[4735]: I0128 09:57:39.838668 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnwx6" event={"ID":"8dee427c-956c-419e-80c7-b52ec600dda8","Type":"ContainerDied","Data":"90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178"} Jan 28 09:57:39 crc kubenswrapper[4735]: I0128 09:57:39.839069 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnwx6" event={"ID":"8dee427c-956c-419e-80c7-b52ec600dda8","Type":"ContainerStarted","Data":"406b1d9e2f3260284543ca5749b67759833c9f6e86997bad0cc9193c60a2db78"} Jan 28 09:57:40 crc kubenswrapper[4735]: I0128 09:57:40.847058 4735 generic.go:334] "Generic (PLEG): container finished" podID="8dee427c-956c-419e-80c7-b52ec600dda8" containerID="145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620" exitCode=0 Jan 28 09:57:40 crc kubenswrapper[4735]: I0128 
09:57:40.847131 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnwx6" event={"ID":"8dee427c-956c-419e-80c7-b52ec600dda8","Type":"ContainerDied","Data":"145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620"} Jan 28 09:57:41 crc kubenswrapper[4735]: I0128 09:57:41.855343 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnwx6" event={"ID":"8dee427c-956c-419e-80c7-b52ec600dda8","Type":"ContainerStarted","Data":"28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c"} Jan 28 09:57:41 crc kubenswrapper[4735]: I0128 09:57:41.876086 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pnwx6" podStartSLOduration=2.4725120990000002 podStartE2EDuration="3.876066902s" podCreationTimestamp="2026-01-28 09:57:38 +0000 UTC" firstStartedPulling="2026-01-28 09:57:39.840945097 +0000 UTC m=+912.990609470" lastFinishedPulling="2026-01-28 09:57:41.2444999 +0000 UTC m=+914.394164273" observedRunningTime="2026-01-28 09:57:41.872489992 +0000 UTC m=+915.022154385" watchObservedRunningTime="2026-01-28 09:57:41.876066902 +0000 UTC m=+915.025731275" Jan 28 09:57:42 crc kubenswrapper[4735]: I0128 09:57:42.821942 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-745854f465-rgjxp"] Jan 28 09:57:42 crc kubenswrapper[4735]: I0128 09:57:42.830339 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" Jan 28 09:57:42 crc kubenswrapper[4735]: I0128 09:57:42.832382 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-d4sfd" Jan 28 09:57:42 crc kubenswrapper[4735]: I0128 09:57:42.847688 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-745854f465-rgjxp"] Jan 28 09:57:42 crc kubenswrapper[4735]: I0128 09:57:42.917390 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngkk\" (UniqueName: \"kubernetes.io/projected/eb187ad2-0ae5-41e5-b649-4a55cdfb6aef-kube-api-access-dngkk\") pod \"openstack-operator-controller-init-745854f465-rgjxp\" (UID: \"eb187ad2-0ae5-41e5-b649-4a55cdfb6aef\") " pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" Jan 28 09:57:43 crc kubenswrapper[4735]: I0128 09:57:43.019181 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngkk\" (UniqueName: \"kubernetes.io/projected/eb187ad2-0ae5-41e5-b649-4a55cdfb6aef-kube-api-access-dngkk\") pod \"openstack-operator-controller-init-745854f465-rgjxp\" (UID: \"eb187ad2-0ae5-41e5-b649-4a55cdfb6aef\") " pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" Jan 28 09:57:43 crc kubenswrapper[4735]: I0128 09:57:43.040692 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngkk\" (UniqueName: \"kubernetes.io/projected/eb187ad2-0ae5-41e5-b649-4a55cdfb6aef-kube-api-access-dngkk\") pod \"openstack-operator-controller-init-745854f465-rgjxp\" (UID: \"eb187ad2-0ae5-41e5-b649-4a55cdfb6aef\") " pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" Jan 28 09:57:43 crc kubenswrapper[4735]: I0128 09:57:43.162363 4735 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" Jan 28 09:57:43 crc kubenswrapper[4735]: I0128 09:57:43.450261 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-745854f465-rgjxp"] Jan 28 09:57:43 crc kubenswrapper[4735]: I0128 09:57:43.872177 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" event={"ID":"eb187ad2-0ae5-41e5-b649-4a55cdfb6aef","Type":"ContainerStarted","Data":"eebd5f99c5638e0173f49afc1520e1a11585f740b12c72ff3825595816401cb7"} Jan 28 09:57:47 crc kubenswrapper[4735]: I0128 09:57:47.925992 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" event={"ID":"eb187ad2-0ae5-41e5-b649-4a55cdfb6aef","Type":"ContainerStarted","Data":"143d226cb22cc168703823d4d5a25db444bd10b3caa07e22c7d55f2b4e049702"} Jan 28 09:57:47 crc kubenswrapper[4735]: I0128 09:57:47.926633 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" Jan 28 09:57:48 crc kubenswrapper[4735]: I0128 09:57:48.681162 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:48 crc kubenswrapper[4735]: I0128 09:57:48.681548 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:48 crc kubenswrapper[4735]: I0128 09:57:48.735082 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:48 crc kubenswrapper[4735]: I0128 09:57:48.756768 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" 
podStartSLOduration=3.088432141 podStartE2EDuration="6.756726081s" podCreationTimestamp="2026-01-28 09:57:42 +0000 UTC" firstStartedPulling="2026-01-28 09:57:43.459173794 +0000 UTC m=+916.608838177" lastFinishedPulling="2026-01-28 09:57:47.127467744 +0000 UTC m=+920.277132117" observedRunningTime="2026-01-28 09:57:47.959844416 +0000 UTC m=+921.109508799" watchObservedRunningTime="2026-01-28 09:57:48.756726081 +0000 UTC m=+921.906390464" Jan 28 09:57:48 crc kubenswrapper[4735]: I0128 09:57:48.974512 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:50 crc kubenswrapper[4735]: I0128 09:57:50.448719 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnwx6"] Jan 28 09:57:51 crc kubenswrapper[4735]: I0128 09:57:51.951810 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pnwx6" podUID="8dee427c-956c-419e-80c7-b52ec600dda8" containerName="registry-server" containerID="cri-o://28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c" gracePeriod=2 Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.315463 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.398001 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-utilities\") pod \"8dee427c-956c-419e-80c7-b52ec600dda8\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.398078 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-catalog-content\") pod \"8dee427c-956c-419e-80c7-b52ec600dda8\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.398112 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8856\" (UniqueName: \"kubernetes.io/projected/8dee427c-956c-419e-80c7-b52ec600dda8-kube-api-access-f8856\") pod \"8dee427c-956c-419e-80c7-b52ec600dda8\" (UID: \"8dee427c-956c-419e-80c7-b52ec600dda8\") " Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.399100 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-utilities" (OuterVolumeSpecName: "utilities") pod "8dee427c-956c-419e-80c7-b52ec600dda8" (UID: "8dee427c-956c-419e-80c7-b52ec600dda8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.406696 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dee427c-956c-419e-80c7-b52ec600dda8-kube-api-access-f8856" (OuterVolumeSpecName: "kube-api-access-f8856") pod "8dee427c-956c-419e-80c7-b52ec600dda8" (UID: "8dee427c-956c-419e-80c7-b52ec600dda8"). InnerVolumeSpecName "kube-api-access-f8856". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.422026 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8dee427c-956c-419e-80c7-b52ec600dda8" (UID: "8dee427c-956c-419e-80c7-b52ec600dda8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.500305 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.500346 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dee427c-956c-419e-80c7-b52ec600dda8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.500364 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8856\" (UniqueName: \"kubernetes.io/projected/8dee427c-956c-419e-80c7-b52ec600dda8-kube-api-access-f8856\") on node \"crc\" DevicePath \"\"" Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.960914 4735 generic.go:334] "Generic (PLEG): container finished" podID="8dee427c-956c-419e-80c7-b52ec600dda8" containerID="28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c" exitCode=0 Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.960974 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnwx6" event={"ID":"8dee427c-956c-419e-80c7-b52ec600dda8","Type":"ContainerDied","Data":"28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c"} Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.961004 4735 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnwx6" Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.961034 4735 scope.go:117] "RemoveContainer" containerID="28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c" Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.961015 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnwx6" event={"ID":"8dee427c-956c-419e-80c7-b52ec600dda8","Type":"ContainerDied","Data":"406b1d9e2f3260284543ca5749b67759833c9f6e86997bad0cc9193c60a2db78"} Jan 28 09:57:52 crc kubenswrapper[4735]: I0128 09:57:52.982137 4735 scope.go:117] "RemoveContainer" containerID="145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620" Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.001894 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnwx6"] Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.003904 4735 scope.go:117] "RemoveContainer" containerID="90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178" Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.009563 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnwx6"] Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.032618 4735 scope.go:117] "RemoveContainer" containerID="28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c" Jan 28 09:57:53 crc kubenswrapper[4735]: E0128 09:57:53.033175 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c\": container with ID starting with 28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c not found: ID does not exist" containerID="28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c" Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.033228 4735 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c"} err="failed to get container status \"28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c\": rpc error: code = NotFound desc = could not find container \"28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c\": container with ID starting with 28a33b9d9ab762faf6a593f6bd3c6dfb7489985d5db04b052f7623562db8273c not found: ID does not exist" Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.033266 4735 scope.go:117] "RemoveContainer" containerID="145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620" Jan 28 09:57:53 crc kubenswrapper[4735]: E0128 09:57:53.033601 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620\": container with ID starting with 145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620 not found: ID does not exist" containerID="145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620" Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.033648 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620"} err="failed to get container status \"145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620\": rpc error: code = NotFound desc = could not find container \"145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620\": container with ID starting with 145d4b63c166aa1c37a64bf5e8deed48d6d5db873979e4ce4afaed9788186620 not found: ID does not exist" Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.033672 4735 scope.go:117] "RemoveContainer" containerID="90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178" Jan 28 09:57:53 crc kubenswrapper[4735]: E0128 
09:57:53.034179 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178\": container with ID starting with 90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178 not found: ID does not exist" containerID="90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178" Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.034223 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178"} err="failed to get container status \"90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178\": rpc error: code = NotFound desc = could not find container \"90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178\": container with ID starting with 90c6aa07bd36732d843ccfad2e7d98700a32519ca1849c961fe4bec8be784178 not found: ID does not exist" Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.166257 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-745854f465-rgjxp" Jan 28 09:57:53 crc kubenswrapper[4735]: I0128 09:57:53.511557 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dee427c-956c-419e-80c7-b52ec600dda8" path="/var/lib/kubelet/pods/8dee427c-956c-419e-80c7-b52ec600dda8/volumes" Jan 28 09:58:03 crc kubenswrapper[4735]: I0128 09:58:03.947408 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jp9k5"] Jan 28 09:58:03 crc kubenswrapper[4735]: E0128 09:58:03.948146 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dee427c-956c-419e-80c7-b52ec600dda8" containerName="extract-content" Jan 28 09:58:03 crc kubenswrapper[4735]: I0128 09:58:03.948158 4735 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8dee427c-956c-419e-80c7-b52ec600dda8" containerName="extract-content" Jan 28 09:58:03 crc kubenswrapper[4735]: E0128 09:58:03.948168 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dee427c-956c-419e-80c7-b52ec600dda8" containerName="registry-server" Jan 28 09:58:03 crc kubenswrapper[4735]: I0128 09:58:03.948174 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dee427c-956c-419e-80c7-b52ec600dda8" containerName="registry-server" Jan 28 09:58:03 crc kubenswrapper[4735]: E0128 09:58:03.948196 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dee427c-956c-419e-80c7-b52ec600dda8" containerName="extract-utilities" Jan 28 09:58:03 crc kubenswrapper[4735]: I0128 09:58:03.948202 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dee427c-956c-419e-80c7-b52ec600dda8" containerName="extract-utilities" Jan 28 09:58:03 crc kubenswrapper[4735]: I0128 09:58:03.948308 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dee427c-956c-419e-80c7-b52ec600dda8" containerName="registry-server" Jan 28 09:58:03 crc kubenswrapper[4735]: I0128 09:58:03.949059 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:03 crc kubenswrapper[4735]: I0128 09:58:03.960323 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jp9k5"] Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.056830 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-utilities\") pod \"community-operators-jp9k5\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.056908 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-catalog-content\") pod \"community-operators-jp9k5\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.057151 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dz24\" (UniqueName: \"kubernetes.io/projected/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-kube-api-access-7dz24\") pod \"community-operators-jp9k5\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.158068 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-utilities\") pod \"community-operators-jp9k5\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.158402 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-catalog-content\") pod \"community-operators-jp9k5\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.158541 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dz24\" (UniqueName: \"kubernetes.io/projected/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-kube-api-access-7dz24\") pod \"community-operators-jp9k5\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.158669 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-utilities\") pod \"community-operators-jp9k5\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.159026 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-catalog-content\") pod \"community-operators-jp9k5\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.177401 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dz24\" (UniqueName: \"kubernetes.io/projected/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-kube-api-access-7dz24\") pod \"community-operators-jp9k5\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.266203 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jp9k5"
Jan 28 09:58:04 crc kubenswrapper[4735]: I0128 09:58:04.789043 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jp9k5"]
Jan 28 09:58:05 crc kubenswrapper[4735]: I0128 09:58:05.038353 4735 generic.go:334] "Generic (PLEG): container finished" podID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerID="a7a2db7dc67ff583dabd390c004095e0893b6a140ab7e7ccac783139bbdfd713" exitCode=0
Jan 28 09:58:05 crc kubenswrapper[4735]: I0128 09:58:05.038407 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp9k5" event={"ID":"e5d8fc87-b64d-48ec-95e2-20a73de5be8c","Type":"ContainerDied","Data":"a7a2db7dc67ff583dabd390c004095e0893b6a140ab7e7ccac783139bbdfd713"}
Jan 28 09:58:05 crc kubenswrapper[4735]: I0128 09:58:05.038691 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp9k5" event={"ID":"e5d8fc87-b64d-48ec-95e2-20a73de5be8c","Type":"ContainerStarted","Data":"1a931080a14d2bf66fd1a6dd8e7490cec1983718e1a8e7a2b78af4c2c374e19e"}
Jan 28 09:58:05 crc kubenswrapper[4735]: I0128 09:58:05.429496 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 09:58:05 crc kubenswrapper[4735]: I0128 09:58:05.429567 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 09:58:07 crc kubenswrapper[4735]: I0128 09:58:07.050423 4735 generic.go:334] "Generic (PLEG): container finished" podID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerID="566884dfc0125e70b8bda9be3bb7a49200c85fa3a94fe30ca634947ba8b402f5" exitCode=0
Jan 28 09:58:07 crc kubenswrapper[4735]: I0128 09:58:07.050475 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp9k5" event={"ID":"e5d8fc87-b64d-48ec-95e2-20a73de5be8c","Type":"ContainerDied","Data":"566884dfc0125e70b8bda9be3bb7a49200c85fa3a94fe30ca634947ba8b402f5"}
Jan 28 09:58:08 crc kubenswrapper[4735]: I0128 09:58:08.058610 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp9k5" event={"ID":"e5d8fc87-b64d-48ec-95e2-20a73de5be8c","Type":"ContainerStarted","Data":"26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d"}
Jan 28 09:58:08 crc kubenswrapper[4735]: I0128 09:58:08.072276 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jp9k5" podStartSLOduration=2.52259865 podStartE2EDuration="5.07225802s" podCreationTimestamp="2026-01-28 09:58:03 +0000 UTC" firstStartedPulling="2026-01-28 09:58:05.039750137 +0000 UTC m=+938.189414520" lastFinishedPulling="2026-01-28 09:58:07.589409517 +0000 UTC m=+940.739073890" observedRunningTime="2026-01-28 09:58:08.071603564 +0000 UTC m=+941.221267937" watchObservedRunningTime="2026-01-28 09:58:08.07225802 +0000 UTC m=+941.221922393"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.641736 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.643006 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.645477 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ljxnk"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.661428 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.671075 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.672352 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.678414 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-p2sxg"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.688054 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.688895 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.691474 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-x8nnw"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.714676 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.719763 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.720732 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.725098 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sgbcl"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.733210 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.739796 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-77khw"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.740904 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.742587 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-5qwvq"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.747820 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.756864 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.757655 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.763895 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-77khw"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.764550 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx6z5\" (UniqueName: \"kubernetes.io/projected/60a1f78e-f6c9-4335-8545-533014df520c-kube-api-access-mx6z5\") pod \"barbican-operator-controller-manager-65ff799cfd-vxtqg\" (UID: \"60a1f78e-f6c9-4335-8545-533014df520c\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.765449 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mfgpm"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.780292 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.843197 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.857579 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.861285 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.861568 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9n825"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.866299 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx6z5\" (UniqueName: \"kubernetes.io/projected/60a1f78e-f6c9-4335-8545-533014df520c-kube-api-access-mx6z5\") pod \"barbican-operator-controller-manager-65ff799cfd-vxtqg\" (UID: \"60a1f78e-f6c9-4335-8545-533014df520c\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.866354 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm9fv\" (UniqueName: \"kubernetes.io/projected/ab02d2a5-6cf8-4f01-8381-2ea58829a846-kube-api-access-bm9fv\") pod \"heat-operator-controller-manager-575ffb885b-77khw\" (UID: \"ab02d2a5-6cf8-4f01-8381-2ea58829a846\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.866410 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhtq\" (UniqueName: \"kubernetes.io/projected/ad40699a-df4d-4aef-ba9e-ec411784138f-kube-api-access-fzhtq\") pod \"designate-operator-controller-manager-77554cdc5c-vlxzd\" (UID: \"ad40699a-df4d-4aef-ba9e-ec411784138f\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.866432 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgff7\" (UniqueName: \"kubernetes.io/projected/23c42560-8419-496c-8b21-ebb1315a7e4d-kube-api-access-mgff7\") pod \"glance-operator-controller-manager-67dd55ff59-5qn45\" (UID: \"23c42560-8419-496c-8b21-ebb1315a7e4d\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.866471 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6ksk\" (UniqueName: \"kubernetes.io/projected/9ada3a05-5552-4191-ac59-bbeeb2621780-kube-api-access-c6ksk\") pod \"horizon-operator-controller-manager-77d5c5b54f-zwpzq\" (UID: \"9ada3a05-5552-4191-ac59-bbeeb2621780\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.866494 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5jbb\" (UniqueName: \"kubernetes.io/projected/4eee1f38-6076-431a-83c4-f6ac2c91bf2b-kube-api-access-v5jbb\") pod \"cinder-operator-controller-manager-655bf9cfbb-pxbwq\" (UID: \"4eee1f38-6076-431a-83c4-f6ac2c91bf2b\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.891582 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.904590 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.906668 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx6z5\" (UniqueName: \"kubernetes.io/projected/60a1f78e-f6c9-4335-8545-533014df520c-kube-api-access-mx6z5\") pod \"barbican-operator-controller-manager-65ff799cfd-vxtqg\" (UID: \"60a1f78e-f6c9-4335-8545-533014df520c\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.906889 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.909701 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-98lhh"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.923501 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.951732 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.952559 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.963044 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kf8mp"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.967165 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm9fv\" (UniqueName: \"kubernetes.io/projected/ab02d2a5-6cf8-4f01-8381-2ea58829a846-kube-api-access-bm9fv\") pod \"heat-operator-controller-manager-575ffb885b-77khw\" (UID: \"ab02d2a5-6cf8-4f01-8381-2ea58829a846\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.967220 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhtq\" (UniqueName: \"kubernetes.io/projected/ad40699a-df4d-4aef-ba9e-ec411784138f-kube-api-access-fzhtq\") pod \"designate-operator-controller-manager-77554cdc5c-vlxzd\" (UID: \"ad40699a-df4d-4aef-ba9e-ec411784138f\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.967248 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgff7\" (UniqueName: \"kubernetes.io/projected/23c42560-8419-496c-8b21-ebb1315a7e4d-kube-api-access-mgff7\") pod \"glance-operator-controller-manager-67dd55ff59-5qn45\" (UID: \"23c42560-8419-496c-8b21-ebb1315a7e4d\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.967287 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6ksk\" (UniqueName: \"kubernetes.io/projected/9ada3a05-5552-4191-ac59-bbeeb2621780-kube-api-access-c6ksk\") pod \"horizon-operator-controller-manager-77d5c5b54f-zwpzq\" (UID: \"9ada3a05-5552-4191-ac59-bbeeb2621780\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.967311 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5jbb\" (UniqueName: \"kubernetes.io/projected/4eee1f38-6076-431a-83c4-f6ac2c91bf2b-kube-api-access-v5jbb\") pod \"cinder-operator-controller-manager-655bf9cfbb-pxbwq\" (UID: \"4eee1f38-6076-431a-83c4-f6ac2c91bf2b\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.967336 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m485g\" (UniqueName: \"kubernetes.io/projected/8139f6ed-d10a-4966-a7d2-a5a9412bd235-kube-api-access-m485g\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.967360 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.968257 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.969069 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.972468 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.977363 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4ctgs"
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.981395 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths"]
Jan 28 09:58:11 crc kubenswrapper[4735]: I0128 09:58:11.998401 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm9fv\" (UniqueName: \"kubernetes.io/projected/ab02d2a5-6cf8-4f01-8381-2ea58829a846-kube-api-access-bm9fv\") pod \"heat-operator-controller-manager-575ffb885b-77khw\" (UID: \"ab02d2a5-6cf8-4f01-8381-2ea58829a846\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.000833 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.001858 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgff7\" (UniqueName: \"kubernetes.io/projected/23c42560-8419-496c-8b21-ebb1315a7e4d-kube-api-access-mgff7\") pod \"glance-operator-controller-manager-67dd55ff59-5qn45\" (UID: \"23c42560-8419-496c-8b21-ebb1315a7e4d\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.002475 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6ksk\" (UniqueName: \"kubernetes.io/projected/9ada3a05-5552-4191-ac59-bbeeb2621780-kube-api-access-c6ksk\") pod \"horizon-operator-controller-manager-77d5c5b54f-zwpzq\" (UID: \"9ada3a05-5552-4191-ac59-bbeeb2621780\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.007728 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5jbb\" (UniqueName: \"kubernetes.io/projected/4eee1f38-6076-431a-83c4-f6ac2c91bf2b-kube-api-access-v5jbb\") pod \"cinder-operator-controller-manager-655bf9cfbb-pxbwq\" (UID: \"4eee1f38-6076-431a-83c4-f6ac2c91bf2b\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.009872 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.011088 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.014486 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhtq\" (UniqueName: \"kubernetes.io/projected/ad40699a-df4d-4aef-ba9e-ec411784138f-kube-api-access-fzhtq\") pod \"designate-operator-controller-manager-77554cdc5c-vlxzd\" (UID: \"ad40699a-df4d-4aef-ba9e-ec411784138f\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.014656 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.014917 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ctmfk"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.016134 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.018053 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xjzjh"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.020027 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.027497 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.034275 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.035442 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.038832 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.039087 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9rb9c"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.052194 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.066629 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.067601 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.068142 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.069526 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rldxk\" (UniqueName: \"kubernetes.io/projected/dbfb38f5-9834-44f5-a84d-f17bb28538f6-kube-api-access-rldxk\") pod \"manila-operator-controller-manager-849fcfbb6b-lt6sx\" (UID: \"dbfb38f5-9834-44f5-a84d-f17bb28538f6\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.069616 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m485g\" (UniqueName: \"kubernetes.io/projected/8139f6ed-d10a-4966-a7d2-a5a9412bd235-kube-api-access-m485g\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.069658 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.069702 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrs9z\" (UniqueName: \"kubernetes.io/projected/fa2276dc-647b-427e-9ff5-c25f1d351ee6-kube-api-access-mrs9z\") pod \"keystone-operator-controller-manager-55f684fd56-b2ths\" (UID: \"fa2276dc-647b-427e-9ff5-c25f1d351ee6\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.069774 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4km2v\" (UniqueName: \"kubernetes.io/projected/75e33671-8cc3-4470-845b-b45c777c3e9b-kube-api-access-4km2v\") pod \"ironic-operator-controller-manager-768b776ffb-gtbjm\" (UID: \"75e33671-8cc3-4470-845b-b45c777c3e9b\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm"
Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.069964 4735 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.070029 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert podName:8139f6ed-d10a-4966-a7d2-a5a9412bd235 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:12.570006656 +0000 UTC m=+945.719671029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert") pod "infra-operator-controller-manager-7d75bc88d5-mz4t7" (UID: "8139f6ed-d10a-4966-a7d2-a5a9412bd235") : secret "infra-operator-webhook-server-cert" not found
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.075292 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.082209 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-lgl2d"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.084652 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.085666 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.087389 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4jdfg"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.094545 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.095346 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.096639 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m485g\" (UniqueName: \"kubernetes.io/projected/8139f6ed-d10a-4966-a7d2-a5a9412bd235-kube-api-access-m485g\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.097236 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5zkm2"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.097429 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.099345 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.103977 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.126546 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.127783 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.128303 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.129140 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9hdng"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.141523 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.145457 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.149220 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b8nfn"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.171738 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.172650 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxs85\" (UniqueName: \"kubernetes.io/projected/d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17-kube-api-access-lxs85\") pod \"nova-operator-controller-manager-ddcbfd695-fp94q\" (UID: \"d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.173062 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rldxk\" (UniqueName: \"kubernetes.io/projected/dbfb38f5-9834-44f5-a84d-f17bb28538f6-kube-api-access-rldxk\") pod \"manila-operator-controller-manager-849fcfbb6b-lt6sx\" (UID: \"dbfb38f5-9834-44f5-a84d-f17bb28538f6\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.173220 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkclg\" (UniqueName: \"kubernetes.io/projected/99934609-5407-442a-a887-ba47b516506a-kube-api-access-rkclg\") pod \"neutron-operator-controller-manager-7ffd8d76d4-xds2t\" (UID: \"99934609-5407-442a-a887-ba47b516506a\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.173344 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrc2\" (UniqueName: \"kubernetes.io/projected/6300756c-e96b-40d5-ab3d-a044cb39016b-kube-api-access-mwrc2\") pod \"octavia-operator-controller-manager-94dd99d7d-6pbvz\" (UID: \"6300756c-e96b-40d5-ab3d-a044cb39016b\") " pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.174958 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrs9z\" (UniqueName: \"kubernetes.io/projected/fa2276dc-647b-427e-9ff5-c25f1d351ee6-kube-api-access-mrs9z\") pod \"keystone-operator-controller-manager-55f684fd56-b2ths\" (UID: \"fa2276dc-647b-427e-9ff5-c25f1d351ee6\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.175050 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4km2v\" (UniqueName: \"kubernetes.io/projected/75e33671-8cc3-4470-845b-b45c777c3e9b-kube-api-access-4km2v\") pod \"ironic-operator-controller-manager-768b776ffb-gtbjm\" (UID: \"75e33671-8cc3-4470-845b-b45c777c3e9b\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.175106 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8ntp\" (UniqueName: \"kubernetes.io/projected/7ee4898b-8413-497c-b0f8-6003e75d1bd4-kube-api-access-t8ntp\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr\" (UID: \"7ee4898b-8413-497c-b0f8-6003e75d1bd4\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.193302 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4km2v\" (UniqueName: \"kubernetes.io/projected/75e33671-8cc3-4470-845b-b45c777c3e9b-kube-api-access-4km2v\") pod \"ironic-operator-controller-manager-768b776ffb-gtbjm\" (UID: \"75e33671-8cc3-4470-845b-b45c777c3e9b\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.194790 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rldxk\" (UniqueName: \"kubernetes.io/projected/dbfb38f5-9834-44f5-a84d-f17bb28538f6-kube-api-access-rldxk\") pod \"manila-operator-controller-manager-849fcfbb6b-lt6sx\" (UID: \"dbfb38f5-9834-44f5-a84d-f17bb28538f6\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.196777 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.197023 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrs9z\" (UniqueName: \"kubernetes.io/projected/fa2276dc-647b-427e-9ff5-c25f1d351ee6-kube-api-access-mrs9z\") pod \"keystone-operator-controller-manager-55f684fd56-b2ths\" (UID: \"fa2276dc-647b-427e-9ff5-c25f1d351ee6\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.238043 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv"]
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.245259 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.270385 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.276113 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8ntp\" (UniqueName: \"kubernetes.io/projected/7ee4898b-8413-497c-b0f8-6003e75d1bd4-kube-api-access-t8ntp\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr\" (UID: \"7ee4898b-8413-497c-b0f8-6003e75d1bd4\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.276166 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms9zs\" (UniqueName: \"kubernetes.io/projected/24c4a2af-7bb3-4a12-8279-58f7dda0ef7e-kube-api-access-ms9zs\") pod \"swift-operator-controller-manager-547cbdb99f-2hkvv\" (UID: \"24c4a2af-7bb3-4a12-8279-58f7dda0ef7e\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.276211 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxs85\" (UniqueName: \"kubernetes.io/projected/d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17-kube-api-access-lxs85\") pod \"nova-operator-controller-manager-ddcbfd695-fp94q\" (UID: \"d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.276242 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.276275 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ht92\" (UniqueName: \"kubernetes.io/projected/49ce66b9-31db-4d8d-adc4-f7e178cdceec-kube-api-access-9ht92\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.276319 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9ng\" (UniqueName: \"kubernetes.io/projected/b91503a2-0819-4923-a8c2-63f35bded503-kube-api-access-5q9ng\") pod \"placement-operator-controller-manager-79d5ccc684-8svwm\" (UID: \"b91503a2-0819-4923-a8c2-63f35bded503\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm"
Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.276353 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkclg\" (UniqueName: \"kubernetes.io/projected/99934609-5407-442a-a887-ba47b516506a-kube-api-access-rkclg\") pod \"neutron-operator-controller-manager-7ffd8d76d4-xds2t\" (UID: \"99934609-5407-442a-a887-ba47b516506a\") "
pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.276388 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrc2\" (UniqueName: \"kubernetes.io/projected/6300756c-e96b-40d5-ab3d-a044cb39016b-kube-api-access-mwrc2\") pod \"octavia-operator-controller-manager-94dd99d7d-6pbvz\" (UID: \"6300756c-e96b-40d5-ab3d-a044cb39016b\") " pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.276413 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvxl\" (UniqueName: \"kubernetes.io/projected/c0301ee4-c19c-425e-9ba4-f66e2b405835-kube-api-access-sfvxl\") pod \"ovn-operator-controller-manager-6f75f45d54-xvd9h\" (UID: \"c0301ee4-c19c-425e-9ba4-f66e2b405835\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.290365 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.293459 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.294432 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8ntp\" (UniqueName: \"kubernetes.io/projected/7ee4898b-8413-497c-b0f8-6003e75d1bd4-kube-api-access-t8ntp\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr\" (UID: \"7ee4898b-8413-497c-b0f8-6003e75d1bd4\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.295500 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrc2\" (UniqueName: \"kubernetes.io/projected/6300756c-e96b-40d5-ab3d-a044cb39016b-kube-api-access-mwrc2\") pod \"octavia-operator-controller-manager-94dd99d7d-6pbvz\" (UID: \"6300756c-e96b-40d5-ab3d-a044cb39016b\") " pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.298341 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkclg\" (UniqueName: \"kubernetes.io/projected/99934609-5407-442a-a887-ba47b516506a-kube-api-access-rkclg\") pod \"neutron-operator-controller-manager-7ffd8d76d4-xds2t\" (UID: \"99934609-5407-442a-a887-ba47b516506a\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.303621 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxs85\" (UniqueName: \"kubernetes.io/projected/d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17-kube-api-access-lxs85\") pod \"nova-operator-controller-manager-ddcbfd695-fp94q\" (UID: \"d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.304271 4735 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.305174 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.309400 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-l5kv4" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.389009 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.400904 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms9zs\" (UniqueName: \"kubernetes.io/projected/24c4a2af-7bb3-4a12-8279-58f7dda0ef7e-kube-api-access-ms9zs\") pod \"swift-operator-controller-manager-547cbdb99f-2hkvv\" (UID: \"24c4a2af-7bb3-4a12-8279-58f7dda0ef7e\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.401008 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.401165 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ht92\" (UniqueName: \"kubernetes.io/projected/49ce66b9-31db-4d8d-adc4-f7e178cdceec-kube-api-access-9ht92\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: 
\"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.403339 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.403579 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert podName:49ce66b9-31db-4d8d-adc4-f7e178cdceec nodeName:}" failed. No retries permitted until 2026-01-28 09:58:12.903556517 +0000 UTC m=+946.053220890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" (UID: "49ce66b9-31db-4d8d-adc4-f7e178cdceec") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.401245 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9ng\" (UniqueName: \"kubernetes.io/projected/b91503a2-0819-4923-a8c2-63f35bded503-kube-api-access-5q9ng\") pod \"placement-operator-controller-manager-79d5ccc684-8svwm\" (UID: \"b91503a2-0819-4923-a8c2-63f35bded503\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.407566 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6tk2\" (UniqueName: \"kubernetes.io/projected/351faed5-c166-4766-99af-3c5b728b1f78-kube-api-access-h6tk2\") pod \"telemetry-operator-controller-manager-799bc87c89-bcnkf\" (UID: \"351faed5-c166-4766-99af-3c5b728b1f78\") " 
pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.407653 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvxl\" (UniqueName: \"kubernetes.io/projected/c0301ee4-c19c-425e-9ba4-f66e2b405835-kube-api-access-sfvxl\") pod \"ovn-operator-controller-manager-6f75f45d54-xvd9h\" (UID: \"c0301ee4-c19c-425e-9ba4-f66e2b405835\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.424657 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.447029 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.447077 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvxl\" (UniqueName: \"kubernetes.io/projected/c0301ee4-c19c-425e-9ba4-f66e2b405835-kube-api-access-sfvxl\") pod \"ovn-operator-controller-manager-6f75f45d54-xvd9h\" (UID: \"c0301ee4-c19c-425e-9ba4-f66e2b405835\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.453240 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9ng\" (UniqueName: \"kubernetes.io/projected/b91503a2-0819-4923-a8c2-63f35bded503-kube-api-access-5q9ng\") pod \"placement-operator-controller-manager-79d5ccc684-8svwm\" (UID: \"b91503a2-0819-4923-a8c2-63f35bded503\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.461866 4735 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-ms9zs\" (UniqueName: \"kubernetes.io/projected/24c4a2af-7bb3-4a12-8279-58f7dda0ef7e-kube-api-access-ms9zs\") pod \"swift-operator-controller-manager-547cbdb99f-2hkvv\" (UID: \"24c4a2af-7bb3-4a12-8279-58f7dda0ef7e\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.490776 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.502218 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ht92\" (UniqueName: \"kubernetes.io/projected/49ce66b9-31db-4d8d-adc4-f7e178cdceec-kube-api-access-9ht92\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.508820 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6tk2\" (UniqueName: \"kubernetes.io/projected/351faed5-c166-4766-99af-3c5b728b1f78-kube-api-access-h6tk2\") pod \"telemetry-operator-controller-manager-799bc87c89-bcnkf\" (UID: \"351faed5-c166-4766-99af-3c5b728b1f78\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.527919 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6tk2\" (UniqueName: \"kubernetes.io/projected/351faed5-c166-4766-99af-3c5b728b1f78-kube-api-access-h6tk2\") pod \"telemetry-operator-controller-manager-799bc87c89-bcnkf\" (UID: \"351faed5-c166-4766-99af-3c5b728b1f78\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.529964 
4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.531502 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.533608 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fp4b5" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.537794 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.551596 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.555797 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.556888 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.560691 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-79wmr" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.571100 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.576252 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.582568 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.593509 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.597703 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lvpdx" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.597863 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.597887 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.600640 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.609104 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.613677 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.613803 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjkmp\" (UniqueName: \"kubernetes.io/projected/733ac7cb-682a-4a00-b6eb-404b3010295c-kube-api-access-mjkmp\") pod \"watcher-operator-controller-manager-767b8bc766-cmtww\" (UID: \"733ac7cb-682a-4a00-b6eb-404b3010295c\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.613834 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.613875 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdgff\" (UniqueName: \"kubernetes.io/projected/4699fc29-624c-482c-9e68-e71bdf957701-kube-api-access-fdgff\") pod \"test-operator-controller-manager-69797bbcbd-pcp57\" (UID: \"4699fc29-624c-482c-9e68-e71bdf957701\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" Jan 28 
09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.613945 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjst6\" (UniqueName: \"kubernetes.io/projected/77910ebb-c49c-432e-b224-4fbafcaf7853-kube-api-access-qjst6\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.614017 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.615409 4735 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.615464 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert podName:8139f6ed-d10a-4966-a7d2-a5a9412bd235 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:13.615445484 +0000 UTC m=+946.765109937 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert") pod "infra-operator-controller-manager-7d75bc88d5-mz4t7" (UID: "8139f6ed-d10a-4966-a7d2-a5a9412bd235") : secret "infra-operator-webhook-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.654685 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.662578 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.663651 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.672602 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-t28r4" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.699274 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.709977 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.715336 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjst6\" (UniqueName: \"kubernetes.io/projected/77910ebb-c49c-432e-b224-4fbafcaf7853-kube-api-access-qjst6\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:12 crc 
kubenswrapper[4735]: I0128 09:58:12.715451 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.715489 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjkmp\" (UniqueName: \"kubernetes.io/projected/733ac7cb-682a-4a00-b6eb-404b3010295c-kube-api-access-mjkmp\") pod \"watcher-operator-controller-manager-767b8bc766-cmtww\" (UID: \"733ac7cb-682a-4a00-b6eb-404b3010295c\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.715505 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.715530 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45xl\" (UniqueName: \"kubernetes.io/projected/d401e4f7-afe6-4095-86cc-f43f43130c4b-kube-api-access-h45xl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jbc6s\" (UID: \"d401e4f7-afe6-4095-86cc-f43f43130c4b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.715552 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdgff\" (UniqueName: 
\"kubernetes.io/projected/4699fc29-624c-482c-9e68-e71bdf957701-kube-api-access-fdgff\") pod \"test-operator-controller-manager-69797bbcbd-pcp57\" (UID: \"4699fc29-624c-482c-9e68-e71bdf957701\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.716023 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.716065 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:13.21605111 +0000 UTC m=+946.365715473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "webhook-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.716329 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.716354 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:13.216346837 +0000 UTC m=+946.366011210 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "metrics-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.741932 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjkmp\" (UniqueName: \"kubernetes.io/projected/733ac7cb-682a-4a00-b6eb-404b3010295c-kube-api-access-mjkmp\") pod \"watcher-operator-controller-manager-767b8bc766-cmtww\" (UID: \"733ac7cb-682a-4a00-b6eb-404b3010295c\") " pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.743507 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjst6\" (UniqueName: \"kubernetes.io/projected/77910ebb-c49c-432e-b224-4fbafcaf7853-kube-api-access-qjst6\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.747527 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdgff\" (UniqueName: \"kubernetes.io/projected/4699fc29-624c-482c-9e68-e71bdf957701-kube-api-access-fdgff\") pod \"test-operator-controller-manager-69797bbcbd-pcp57\" (UID: \"4699fc29-624c-482c-9e68-e71bdf957701\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.752633 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.753138 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-77khw"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.801622 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" Jan 28 09:58:12 crc kubenswrapper[4735]: W0128 09:58:12.809877 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab02d2a5_6cf8_4f01_8381_2ea58829a846.slice/crio-fb058e791db89d003329dc43b86444c3b9faa6340969d311f03fee388b7fa9d3 WatchSource:0}: Error finding container fb058e791db89d003329dc43b86444c3b9faa6340969d311f03fee388b7fa9d3: Status 404 returned error can't find the container with id fb058e791db89d003329dc43b86444c3b9faa6340969d311f03fee388b7fa9d3 Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.816421 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45xl\" (UniqueName: \"kubernetes.io/projected/d401e4f7-afe6-4095-86cc-f43f43130c4b-kube-api-access-h45xl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jbc6s\" (UID: \"d401e4f7-afe6-4095-86cc-f43f43130c4b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.823141 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.834965 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.840712 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45xl\" (UniqueName: \"kubernetes.io/projected/d401e4f7-afe6-4095-86cc-f43f43130c4b-kube-api-access-h45xl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jbc6s\" (UID: \"d401e4f7-afe6-4095-86cc-f43f43130c4b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.891848 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.919712 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.920009 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: E0128 09:58:12.920066 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert podName:49ce66b9-31db-4d8d-adc4-f7e178cdceec nodeName:}" failed. No retries permitted until 2026-01-28 09:58:13.920046451 +0000 UTC m=+947.069710824 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" (UID: "49ce66b9-31db-4d8d-adc4-f7e178cdceec") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.926776 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm"] Jan 28 09:58:12 crc kubenswrapper[4735]: I0128 09:58:12.938199 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.053494 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.070102 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.209538 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq" event={"ID":"9ada3a05-5552-4191-ac59-bbeeb2621780","Type":"ContainerStarted","Data":"042d6cba4f86e21928817af4d301247073ed7485668a2feee6e44467d245463a"} Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.222000 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm" event={"ID":"75e33671-8cc3-4470-845b-b45c777c3e9b","Type":"ContainerStarted","Data":"577227bfddd887023eeaee01d7ecad80c6cf6d9ffc5d97efe5c7104cfe1dce4a"} Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.223275 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw" event={"ID":"ab02d2a5-6cf8-4f01-8381-2ea58829a846","Type":"ContainerStarted","Data":"fb058e791db89d003329dc43b86444c3b9faa6340969d311f03fee388b7fa9d3"} Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.226197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.226297 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.226438 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.226480 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:14.226465612 +0000 UTC m=+947.376129985 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "webhook-server-cert" not found Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.227115 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.227178 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:14.227135739 +0000 UTC m=+947.376800112 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "metrics-server-cert" not found Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.230646 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd" event={"ID":"ad40699a-df4d-4aef-ba9e-ec411784138f","Type":"ContainerStarted","Data":"9d148c65f920af07473a2b4a70b0d40cacadab5e37a14fe469b968779e8c2d37"} Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.233120 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45" event={"ID":"23c42560-8419-496c-8b21-ebb1315a7e4d","Type":"ContainerStarted","Data":"f82561706fde573af68031eb7abde7a6a102ebb1bdcb4ed475e5d03c71a97a65"} Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.234996 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg" event={"ID":"60a1f78e-f6c9-4335-8545-533014df520c","Type":"ContainerStarted","Data":"7a1edb9062e2ed9be18a215ee1feca5410042f0830dcd1031ee6279d82462fff"} Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.409917 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.423034 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.428606 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.434416 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.566200 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.566241 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.589678 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.636279 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " 
pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.637282 4735 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.637328 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert podName:8139f6ed-d10a-4966-a7d2-a5a9412bd235 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:15.637314355 +0000 UTC m=+948.786978728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert") pod "infra-operator-controller-manager-7d75bc88d5-mz4t7" (UID: "8139f6ed-d10a-4966-a7d2-a5a9412bd235") : secret "infra-operator-webhook-server-cert" not found Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.684177 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.699207 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww"] Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.704286 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf"] Jan 28 09:58:13 crc kubenswrapper[4735]: W0128 09:58:13.712911 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd401e4f7_afe6_4095_86cc_f43f43130c4b.slice/crio-5e9bf11f91f6d25f3d43512afb9a0e9d31736779962acc8d6fa9122d3473e16e WatchSource:0}: Error finding container 5e9bf11f91f6d25f3d43512afb9a0e9d31736779962acc8d6fa9122d3473e16e: Status 404 returned error can't find the 
container with id 5e9bf11f91f6d25f3d43512afb9a0e9d31736779962acc8d6fa9122d3473e16e Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.717547 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv"] Jan 28 09:58:13 crc kubenswrapper[4735]: W0128 09:58:13.718493 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod351faed5_c166_4766_99af_3c5b728b1f78.slice/crio-2365de5205b1c7c3ef2bb526a52bfe189ef425299de57cd86703211a004eb0b4 WatchSource:0}: Error finding container 2365de5205b1c7c3ef2bb526a52bfe189ef425299de57cd86703211a004eb0b4: Status 404 returned error can't find the container with id 2365de5205b1c7c3ef2bb526a52bfe189ef425299de57cd86703211a004eb0b4 Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.722865 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s"] Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.743507 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjkmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-767b8bc766-cmtww_openstack-operators(733ac7cb-682a-4a00-b6eb-404b3010295c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.744684 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" podUID="733ac7cb-682a-4a00-b6eb-404b3010295c" Jan 28 09:58:13 crc 
kubenswrapper[4735]: W0128 09:58:13.745633 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6300756c_e96b_40d5_ab3d_a044cb39016b.slice/crio-7141cb9938ba2c7c670fbae9043c8608c10b5ce2d3b2e725b1366f68902fac2f WatchSource:0}: Error finding container 7141cb9938ba2c7c670fbae9043c8608c10b5ce2d3b2e725b1366f68902fac2f: Status 404 returned error can't find the container with id 7141cb9938ba2c7c670fbae9043c8608c10b5ce2d3b2e725b1366f68902fac2f Jan 28 09:58:13 crc kubenswrapper[4735]: W0128 09:58:13.746246 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24c4a2af_7bb3_4a12_8279_58f7dda0ef7e.slice/crio-dbb7699f4d0339140da8b272f7089d062d40e1e8a7fe6a22c2ccb6f8befcab44 WatchSource:0}: Error finding container dbb7699f4d0339140da8b272f7089d062d40e1e8a7fe6a22c2ccb6f8befcab44: Status 404 returned error can't find the container with id dbb7699f4d0339140da8b272f7089d062d40e1e8a7fe6a22c2ccb6f8befcab44 Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.750213 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ms9zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-2hkvv_openstack-operators(24c4a2af-7bb3-4a12-8279-58f7dda0ef7e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.750225 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:e75170b995315ff39a52c1ee42fa65486f828a81bad9248710d3361c9847800d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mwrc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-94dd99d7d-6pbvz_openstack-operators(6300756c-e96b-40d5-ab3d-a044cb39016b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.751829 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" podUID="6300756c-e96b-40d5-ab3d-a044cb39016b" Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.751849 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" podUID="24c4a2af-7bb3-4a12-8279-58f7dda0ef7e" Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.825107 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57"] Jan 28 09:58:13 crc kubenswrapper[4735]: W0128 09:58:13.831701 4735 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4699fc29_624c_482c_9e68_e71bdf957701.slice/crio-feff583f105984b6d742e9ab3de953a475cd65ce661786ad806f3d177142ade9 WatchSource:0}: Error finding container feff583f105984b6d742e9ab3de953a475cd65ce661786ad806f3d177142ade9: Status 404 returned error can't find the container with id feff583f105984b6d742e9ab3de953a475cd65ce661786ad806f3d177142ade9 Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.835731 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdgff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-pcp57_openstack-operators(4699fc29-624c-482c-9e68-e71bdf957701): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.837023 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" podUID="4699fc29-624c-482c-9e68-e71bdf957701" Jan 28 09:58:13 crc kubenswrapper[4735]: I0128 09:58:13.943897 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.944310 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 
09:58:13 crc kubenswrapper[4735]: E0128 09:58:13.944440 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert podName:49ce66b9-31db-4d8d-adc4-f7e178cdceec nodeName:}" failed. No retries permitted until 2026-01-28 09:58:15.944419403 +0000 UTC m=+949.094083776 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" (UID: "49ce66b9-31db-4d8d-adc4-f7e178cdceec") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.249556 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.249966 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:14 crc kubenswrapper[4735]: E0128 09:58:14.249743 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 09:58:14 crc kubenswrapper[4735]: E0128 09:58:14.250211 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 
nodeName:}" failed. No retries permitted until 2026-01-28 09:58:16.250194839 +0000 UTC m=+949.399859212 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "webhook-server-cert" not found Jan 28 09:58:14 crc kubenswrapper[4735]: E0128 09:58:14.250151 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 09:58:14 crc kubenswrapper[4735]: E0128 09:58:14.250317 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:16.250298961 +0000 UTC m=+949.399963334 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "metrics-server-cert" not found Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.262210 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" event={"ID":"d401e4f7-afe6-4095-86cc-f43f43130c4b","Type":"ContainerStarted","Data":"5e9bf11f91f6d25f3d43512afb9a0e9d31736779962acc8d6fa9122d3473e16e"} Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.264193 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm" event={"ID":"b91503a2-0819-4923-a8c2-63f35bded503","Type":"ContainerStarted","Data":"95b0a84b3a97f00b74a412e0c31f8cfd70722e7ce3416a428cff80dbb048863e"} Jan 28 09:58:14 crc 
kubenswrapper[4735]: I0128 09:58:14.266739 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.268121 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" event={"ID":"6300756c-e96b-40d5-ab3d-a044cb39016b","Type":"ContainerStarted","Data":"7141cb9938ba2c7c670fbae9043c8608c10b5ce2d3b2e725b1366f68902fac2f"} Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.268962 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.273303 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t" event={"ID":"99934609-5407-442a-a887-ba47b516506a","Type":"ContainerStarted","Data":"83571fe1b508e656f6c3d9b8bf2280ebd115959346978ad624927eb6e7de16a4"} Jan 28 09:58:14 crc kubenswrapper[4735]: E0128 09:58:14.277393 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:e75170b995315ff39a52c1ee42fa65486f828a81bad9248710d3361c9847800d\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" podUID="6300756c-e96b-40d5-ab3d-a044cb39016b" Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.281878 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq" event={"ID":"4eee1f38-6076-431a-83c4-f6ac2c91bf2b","Type":"ContainerStarted","Data":"8835911a1364b0fcc3304a70a95f6aabfd8382baf980f0bac6b7d3be00b13ab0"} Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.296681 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" event={"ID":"24c4a2af-7bb3-4a12-8279-58f7dda0ef7e","Type":"ContainerStarted","Data":"dbb7699f4d0339140da8b272f7089d062d40e1e8a7fe6a22c2ccb6f8befcab44"} Jan 28 09:58:14 crc kubenswrapper[4735]: E0128 09:58:14.297886 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" podUID="24c4a2af-7bb3-4a12-8279-58f7dda0ef7e" Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.301161 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" event={"ID":"4699fc29-624c-482c-9e68-e71bdf957701","Type":"ContainerStarted","Data":"feff583f105984b6d742e9ab3de953a475cd65ce661786ad806f3d177142ade9"} Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.303964 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q" event={"ID":"d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17","Type":"ContainerStarted","Data":"426f9fcf9fe382a605c941bc884c3d26f40740f5b2c1ab010094720291568a8f"} Jan 28 09:58:14 crc kubenswrapper[4735]: E0128 09:58:14.304619 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" podUID="4699fc29-624c-482c-9e68-e71bdf957701" Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.305152 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths" event={"ID":"fa2276dc-647b-427e-9ff5-c25f1d351ee6","Type":"ContainerStarted","Data":"61335e483825c7b0555ef8760efc6d86aa6166ea70d48d50f074b6f7981b2cc7"} Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.307745 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx" event={"ID":"dbfb38f5-9834-44f5-a84d-f17bb28538f6","Type":"ContainerStarted","Data":"537dfcad70e0dd2fb36454e2c6a286e8080f2fc77fb6366706e612080c99d620"} Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.309716 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" event={"ID":"351faed5-c166-4766-99af-3c5b728b1f78","Type":"ContainerStarted","Data":"2365de5205b1c7c3ef2bb526a52bfe189ef425299de57cd86703211a004eb0b4"} Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.321081 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr" event={"ID":"7ee4898b-8413-497c-b0f8-6003e75d1bd4","Type":"ContainerStarted","Data":"41abd82adf6621bc675f17fbb27b4b81ccb424e12cd0a402e7398ad7005aa2c5"} Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.322216 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" event={"ID":"733ac7cb-682a-4a00-b6eb-404b3010295c","Type":"ContainerStarted","Data":"6de74846fc9645ee12179ae269381704508113873402dadbcb35a76af091eefc"} Jan 28 09:58:14 crc kubenswrapper[4735]: E0128 09:58:14.324632 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" 
pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" podUID="733ac7cb-682a-4a00-b6eb-404b3010295c" Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.325878 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" event={"ID":"c0301ee4-c19c-425e-9ba4-f66e2b405835","Type":"ContainerStarted","Data":"4abf1e8b4f4879f9a495894add32abcab243444946f6421671d379fe4e1fc3d5"} Jan 28 09:58:14 crc kubenswrapper[4735]: I0128 09:58:14.395362 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:15 crc kubenswrapper[4735]: E0128 09:58:15.345128 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" podUID="4699fc29-624c-482c-9e68-e71bdf957701" Jan 28 09:58:15 crc kubenswrapper[4735]: E0128 09:58:15.346421 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:35f1eb96f42069bb8f7c33942fb86b41843ba02803464245c16192ccda3d50e4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" podUID="733ac7cb-682a-4a00-b6eb-404b3010295c" Jan 28 09:58:15 crc kubenswrapper[4735]: E0128 09:58:15.346523 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" podUID="24c4a2af-7bb3-4a12-8279-58f7dda0ef7e" Jan 28 09:58:15 crc kubenswrapper[4735]: E0128 09:58:15.346652 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:e75170b995315ff39a52c1ee42fa65486f828a81bad9248710d3361c9847800d\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" podUID="6300756c-e96b-40d5-ab3d-a044cb39016b" Jan 28 09:58:15 crc kubenswrapper[4735]: I0128 09:58:15.437709 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:15 crc kubenswrapper[4735]: I0128 09:58:15.498204 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jp9k5"] Jan 28 09:58:15 crc kubenswrapper[4735]: I0128 09:58:15.675575 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" Jan 28 09:58:15 crc kubenswrapper[4735]: E0128 09:58:15.675789 4735 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 09:58:15 crc kubenswrapper[4735]: E0128 09:58:15.675875 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert podName:8139f6ed-d10a-4966-a7d2-a5a9412bd235 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:19.675857385 +0000 UTC m=+952.825521758 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert") pod "infra-operator-controller-manager-7d75bc88d5-mz4t7" (UID: "8139f6ed-d10a-4966-a7d2-a5a9412bd235") : secret "infra-operator-webhook-server-cert" not found Jan 28 09:58:15 crc kubenswrapper[4735]: I0128 09:58:15.979197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:15 crc kubenswrapper[4735]: E0128 09:58:15.979395 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:15 crc kubenswrapper[4735]: E0128 09:58:15.979442 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert podName:49ce66b9-31db-4d8d-adc4-f7e178cdceec nodeName:}" failed. No retries permitted until 2026-01-28 09:58:19.979428115 +0000 UTC m=+953.129092488 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" (UID: "49ce66b9-31db-4d8d-adc4-f7e178cdceec") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:16 crc kubenswrapper[4735]: I0128 09:58:16.283178 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:16 crc kubenswrapper[4735]: I0128 09:58:16.283248 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:16 crc kubenswrapper[4735]: E0128 09:58:16.283376 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 09:58:16 crc kubenswrapper[4735]: E0128 09:58:16.283439 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:20.283421666 +0000 UTC m=+953.433086039 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "webhook-server-cert" not found Jan 28 09:58:16 crc kubenswrapper[4735]: E0128 09:58:16.283454 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 09:58:16 crc kubenswrapper[4735]: E0128 09:58:16.283503 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:20.283488578 +0000 UTC m=+953.433152951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "metrics-server-cert" not found Jan 28 09:58:17 crc kubenswrapper[4735]: I0128 09:58:17.354055 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jp9k5" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerName="registry-server" containerID="cri-o://26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d" gracePeriod=2 Jan 28 09:58:18 crc kubenswrapper[4735]: I0128 09:58:18.366833 4735 generic.go:334] "Generic (PLEG): container finished" podID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerID="26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d" exitCode=0 Jan 28 09:58:18 crc kubenswrapper[4735]: I0128 09:58:18.366919 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp9k5" 
event={"ID":"e5d8fc87-b64d-48ec-95e2-20a73de5be8c","Type":"ContainerDied","Data":"26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d"} Jan 28 09:58:19 crc kubenswrapper[4735]: I0128 09:58:19.737002 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" Jan 28 09:58:19 crc kubenswrapper[4735]: E0128 09:58:19.737150 4735 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 09:58:19 crc kubenswrapper[4735]: E0128 09:58:19.737720 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert podName:8139f6ed-d10a-4966-a7d2-a5a9412bd235 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:27.737689293 +0000 UTC m=+960.887353696 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert") pod "infra-operator-controller-manager-7d75bc88d5-mz4t7" (UID: "8139f6ed-d10a-4966-a7d2-a5a9412bd235") : secret "infra-operator-webhook-server-cert" not found Jan 28 09:58:20 crc kubenswrapper[4735]: I0128 09:58:20.041681 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:20 crc kubenswrapper[4735]: E0128 09:58:20.041905 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:20 crc kubenswrapper[4735]: E0128 09:58:20.042003 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert podName:49ce66b9-31db-4d8d-adc4-f7e178cdceec nodeName:}" failed. No retries permitted until 2026-01-28 09:58:28.041981631 +0000 UTC m=+961.191646014 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" (UID: "49ce66b9-31db-4d8d-adc4-f7e178cdceec") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:20 crc kubenswrapper[4735]: I0128 09:58:20.346990 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:20 crc kubenswrapper[4735]: I0128 09:58:20.347080 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:20 crc kubenswrapper[4735]: E0128 09:58:20.347242 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 09:58:20 crc kubenswrapper[4735]: E0128 09:58:20.347337 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:28.347315636 +0000 UTC m=+961.496980069 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "metrics-server-cert" not found Jan 28 09:58:20 crc kubenswrapper[4735]: E0128 09:58:20.347242 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 09:58:20 crc kubenswrapper[4735]: E0128 09:58:20.347488 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:28.34747194 +0000 UTC m=+961.497136393 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "webhook-server-cert" not found Jan 28 09:58:24 crc kubenswrapper[4735]: E0128 09:58:24.267343 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d is running failed: container process not found" containerID="26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 09:58:24 crc kubenswrapper[4735]: E0128 09:58:24.267979 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d is running failed: container process not found" 
containerID="26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 09:58:24 crc kubenswrapper[4735]: E0128 09:58:24.268411 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d is running failed: container process not found" containerID="26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 09:58:24 crc kubenswrapper[4735]: E0128 09:58:24.268458 4735 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-jp9k5" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerName="registry-server" Jan 28 09:58:27 crc kubenswrapper[4735]: I0128 09:58:27.755815 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" Jan 28 09:58:27 crc kubenswrapper[4735]: I0128 09:58:27.781764 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8139f6ed-d10a-4966-a7d2-a5a9412bd235-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-mz4t7\" (UID: \"8139f6ed-d10a-4966-a7d2-a5a9412bd235\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" Jan 28 09:58:27 crc kubenswrapper[4735]: I0128 09:58:27.797919 4735 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9n825" Jan 28 09:58:27 crc kubenswrapper[4735]: I0128 09:58:27.806676 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" Jan 28 09:58:28 crc kubenswrapper[4735]: I0128 09:58:28.059136 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.059323 4735 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.059411 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert podName:49ce66b9-31db-4d8d-adc4-f7e178cdceec nodeName:}" failed. No retries permitted until 2026-01-28 09:58:44.059388873 +0000 UTC m=+977.209053306 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" (UID: "49ce66b9-31db-4d8d-adc4-f7e178cdceec") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.332702 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:30e2224475338d3a02d617ae147dc7dc09867cce4ac3543b313a1923c46299fa" Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.332922 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:30e2224475338d3a02d617ae147dc7dc09867cce4ac3543b313a1923c46299fa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4km2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-768b776ffb-gtbjm_openstack-operators(75e33671-8cc3-4470-845b-b45c777c3e9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.334041 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm" podUID="75e33671-8cc3-4470-845b-b45c777c3e9b" Jan 28 09:58:28 crc kubenswrapper[4735]: I0128 09:58:28.362840 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs\") pod 
\"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:28 crc kubenswrapper[4735]: I0128 09:58:28.362915 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.363053 4735 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.363077 4735 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.363108 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. No retries permitted until 2026-01-28 09:58:44.363094346 +0000 UTC m=+977.512758719 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "metrics-server-cert" not found Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.363151 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs podName:77910ebb-c49c-432e-b224-4fbafcaf7853 nodeName:}" failed. 
No retries permitted until 2026-01-28 09:58:44.363131047 +0000 UTC m=+977.512795460 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs") pod "openstack-operator-controller-manager-7f5f655bfc-9jbp7" (UID: "77910ebb-c49c-432e-b224-4fbafcaf7853") : secret "webhook-server-cert" not found Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.459025 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:30e2224475338d3a02d617ae147dc7dc09867cce4ac3543b313a1923c46299fa\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm" podUID="75e33671-8cc3-4470-845b-b45c777c3e9b" Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.986411 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977" Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.986614 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mx6z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-65ff799cfd-vxtqg_openstack-operators(60a1f78e-f6c9-4335-8545-533014df520c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:28 crc kubenswrapper[4735]: E0128 09:58:28.988034 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled 
desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg" podUID="60a1f78e-f6c9-4335-8545-533014df520c" Jan 28 09:58:29 crc kubenswrapper[4735]: E0128 09:58:29.451893 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg" podUID="60a1f78e-f6c9-4335-8545-533014df520c" Jan 28 09:58:29 crc kubenswrapper[4735]: E0128 09:58:29.636418 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591" Jan 28 09:58:29 crc kubenswrapper[4735]: E0128 09:58:29.636633 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5jbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-655bf9cfbb-pxbwq_openstack-operators(4eee1f38-6076-431a-83c4-f6ac2c91bf2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:29 crc kubenswrapper[4735]: E0128 09:58:29.637874 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq" podUID="4eee1f38-6076-431a-83c4-f6ac2c91bf2b" Jan 28 09:58:30 crc kubenswrapper[4735]: E0128 09:58:30.215850 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f" Jan 28 09:58:30 crc kubenswrapper[4735]: E0128 09:58:30.216477 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fzhtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-77554cdc5c-vlxzd_openstack-operators(ad40699a-df4d-4aef-ba9e-ec411784138f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:30 crc kubenswrapper[4735]: E0128 09:58:30.218031 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd" podUID="ad40699a-df4d-4aef-ba9e-ec411784138f" Jan 28 09:58:30 crc kubenswrapper[4735]: E0128 09:58:30.457635 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd" podUID="ad40699a-df4d-4aef-ba9e-ec411784138f" Jan 28 09:58:30 crc kubenswrapper[4735]: E0128 09:58:30.457987 4735 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:7619b8e8814c4d22fcdcc392cdaba2ce279d356fc9263275c91acfba86533591\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq" podUID="4eee1f38-6076-431a-83c4-f6ac2c91bf2b" Jan 28 09:58:30 crc kubenswrapper[4735]: E0128 09:58:30.697923 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 28 09:58:30 crc kubenswrapper[4735]: E0128 09:58:30.698214 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfvxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-xvd9h_openstack-operators(c0301ee4-c19c-425e-9ba4-f66e2b405835): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:30 crc kubenswrapper[4735]: E0128 09:58:30.699444 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" podUID="c0301ee4-c19c-425e-9ba4-f66e2b405835" Jan 28 09:58:31 crc kubenswrapper[4735]: E0128 09:58:31.199922 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20" Jan 28 09:58:31 crc kubenswrapper[4735]: E0128 09:58:31.200137 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h6tk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-799bc87c89-bcnkf_openstack-operators(351faed5-c166-4766-99af-3c5b728b1f78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:31 crc kubenswrapper[4735]: E0128 09:58:31.201325 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" podUID="351faed5-c166-4766-99af-3c5b728b1f78" Jan 28 09:58:31 crc kubenswrapper[4735]: E0128 09:58:31.464271 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" podUID="351faed5-c166-4766-99af-3c5b728b1f78" Jan 28 09:58:31 crc kubenswrapper[4735]: E0128 09:58:31.466178 4735 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" podUID="c0301ee4-c19c-425e-9ba4-f66e2b405835" Jan 28 09:58:31 crc kubenswrapper[4735]: E0128 09:58:31.793068 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84" Jan 28 09:58:31 crc kubenswrapper[4735]: E0128 09:58:31.793316 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rldxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-849fcfbb6b-lt6sx_openstack-operators(dbfb38f5-9834-44f5-a84d-f17bb28538f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:31 crc kubenswrapper[4735]: E0128 09:58:31.794483 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx" podUID="dbfb38f5-9834-44f5-a84d-f17bb28538f6" Jan 28 09:58:32 crc kubenswrapper[4735]: E0128 09:58:32.323444 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 28 09:58:32 crc kubenswrapper[4735]: E0128 09:58:32.323962 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t8ntp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr_openstack-operators(7ee4898b-8413-497c-b0f8-6003e75d1bd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:32 crc kubenswrapper[4735]: E0128 09:58:32.325311 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr" podUID="7ee4898b-8413-497c-b0f8-6003e75d1bd4" Jan 28 09:58:32 crc kubenswrapper[4735]: E0128 09:58:32.473133 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx" podUID="dbfb38f5-9834-44f5-a84d-f17bb28538f6" Jan 28 09:58:32 crc kubenswrapper[4735]: E0128 09:58:32.473376 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr" podUID="7ee4898b-8413-497c-b0f8-6003e75d1bd4" Jan 28 09:58:34 crc kubenswrapper[4735]: E0128 09:58:34.069522 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61" Jan 28 09:58:34 crc kubenswrapper[4735]: E0128 09:58:34.069997 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lxs85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-ddcbfd695-fp94q_openstack-operators(d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:34 crc kubenswrapper[4735]: E0128 09:58:34.071174 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q" podUID="d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.108802 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.271575 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-catalog-content\") pod \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.271630 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-utilities\") pod \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.271650 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dz24\" (UniqueName: \"kubernetes.io/projected/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-kube-api-access-7dz24\") pod \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\" (UID: \"e5d8fc87-b64d-48ec-95e2-20a73de5be8c\") " Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.272980 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-utilities" (OuterVolumeSpecName: "utilities") pod "e5d8fc87-b64d-48ec-95e2-20a73de5be8c" (UID: "e5d8fc87-b64d-48ec-95e2-20a73de5be8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.291240 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-kube-api-access-7dz24" (OuterVolumeSpecName: "kube-api-access-7dz24") pod "e5d8fc87-b64d-48ec-95e2-20a73de5be8c" (UID: "e5d8fc87-b64d-48ec-95e2-20a73de5be8c"). InnerVolumeSpecName "kube-api-access-7dz24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.320026 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5d8fc87-b64d-48ec-95e2-20a73de5be8c" (UID: "e5d8fc87-b64d-48ec-95e2-20a73de5be8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.373608 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.373637 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dz24\" (UniqueName: \"kubernetes.io/projected/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-kube-api-access-7dz24\") on node \"crc\" DevicePath \"\"" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.373649 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d8fc87-b64d-48ec-95e2-20a73de5be8c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.483903 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jp9k5" event={"ID":"e5d8fc87-b64d-48ec-95e2-20a73de5be8c","Type":"ContainerDied","Data":"1a931080a14d2bf66fd1a6dd8e7490cec1983718e1a8e7a2b78af4c2c374e19e"} Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.484010 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jp9k5" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.484066 4735 scope.go:117] "RemoveContainer" containerID="26f83ad8cf352e831e2142c036b99f83f05948bd7928813ee77b2fd3ae72215d" Jan 28 09:58:34 crc kubenswrapper[4735]: E0128 09:58:34.485553 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q" podUID="d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17" Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.531381 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jp9k5"] Jan 28 09:58:34 crc kubenswrapper[4735]: I0128 09:58:34.536515 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jp9k5"] Jan 28 09:58:35 crc kubenswrapper[4735]: E0128 09:58:35.090921 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 28 09:58:35 crc kubenswrapper[4735]: E0128 09:58:35.091435 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrs9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-b2ths_openstack-operators(fa2276dc-647b-427e-9ff5-c25f1d351ee6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:35 crc kubenswrapper[4735]: E0128 09:58:35.092643 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths" podUID="fa2276dc-647b-427e-9ff5-c25f1d351ee6" Jan 28 09:58:35 crc kubenswrapper[4735]: I0128 09:58:35.429755 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:58:35 crc kubenswrapper[4735]: I0128 09:58:35.429813 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:58:35 crc kubenswrapper[4735]: E0128 09:58:35.490350 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths" podUID="fa2276dc-647b-427e-9ff5-c25f1d351ee6" Jan 28 09:58:35 crc kubenswrapper[4735]: I0128 09:58:35.519335 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" path="/var/lib/kubelet/pods/e5d8fc87-b64d-48ec-95e2-20a73de5be8c/volumes" Jan 28 09:58:35 crc kubenswrapper[4735]: E0128 09:58:35.999531 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 28 09:58:36 crc kubenswrapper[4735]: E0128 09:58:35.999977 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h45xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jbc6s_openstack-operators(d401e4f7-afe6-4095-86cc-f43f43130c4b): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:58:36 crc kubenswrapper[4735]: E0128 09:58:36.002071 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" podUID="d401e4f7-afe6-4095-86cc-f43f43130c4b" Jan 28 09:58:36 crc kubenswrapper[4735]: E0128 09:58:36.497995 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" podUID="d401e4f7-afe6-4095-86cc-f43f43130c4b" Jan 28 09:58:37 crc kubenswrapper[4735]: I0128 09:58:37.240861 4735 scope.go:117] "RemoveContainer" containerID="566884dfc0125e70b8bda9be3bb7a49200c85fa3a94fe30ca634947ba8b402f5" Jan 28 09:58:37 crc kubenswrapper[4735]: I0128 09:58:37.301644 4735 scope.go:117] "RemoveContainer" containerID="a7a2db7dc67ff583dabd390c004095e0893b6a140ab7e7ccac783139bbdfd713" Jan 28 09:58:37 crc kubenswrapper[4735]: I0128 09:58:37.523761 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7"] Jan 28 09:58:37 crc kubenswrapper[4735]: W0128 09:58:37.533970 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8139f6ed_d10a_4966_a7d2_a5a9412bd235.slice/crio-f56c1533042fae6465217dc50f3ab366cd8984a7887a0eeaf40d39bf42a4df7c WatchSource:0}: Error finding container f56c1533042fae6465217dc50f3ab366cd8984a7887a0eeaf40d39bf42a4df7c: Status 404 returned error can't find the container with id 
f56c1533042fae6465217dc50f3ab366cd8984a7887a0eeaf40d39bf42a4df7c Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.521218 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" event={"ID":"733ac7cb-682a-4a00-b6eb-404b3010295c","Type":"ContainerStarted","Data":"df9e2f98faaeb7d2d520612215f2a9b91043ec16fb84695673505e8f968a58bb"} Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.521432 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.522560 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" event={"ID":"24c4a2af-7bb3-4a12-8279-58f7dda0ef7e","Type":"ContainerStarted","Data":"fad07e152efa86c84238764c6117bab87ed817216df6432cd2911cb5fa7041aa"} Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.522711 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.523693 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" event={"ID":"4699fc29-624c-482c-9e68-e71bdf957701","Type":"ContainerStarted","Data":"a941efae5b018427333d74d2ac7bccd32b95a9965acc9d5a3ec85911af763424"} Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.524020 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.525611 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm" 
event={"ID":"b91503a2-0819-4923-a8c2-63f35bded503","Type":"ContainerStarted","Data":"01b722d65ca96159b019137d36bd6f368303fa0929111d500a4d85c6aac18ed8"} Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.525677 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.530411 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq" event={"ID":"9ada3a05-5552-4191-ac59-bbeeb2621780","Type":"ContainerStarted","Data":"41e7e4639c8bedb8d54115737f80dfc0487bf2500dfb76d79accfb5f115322a5"} Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.530795 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.531967 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45" event={"ID":"23c42560-8419-496c-8b21-ebb1315a7e4d","Type":"ContainerStarted","Data":"de1845ee3d22ee55504b48c424f4a7f1580198f82b1c036e4fb6940b77aba304"} Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.532267 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.537972 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" podStartSLOduration=2.947608635 podStartE2EDuration="26.537959094s" podCreationTimestamp="2026-01-28 09:58:12 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.743325306 +0000 UTC m=+946.892989679" lastFinishedPulling="2026-01-28 09:58:37.333675765 +0000 UTC m=+970.483340138" 
observedRunningTime="2026-01-28 09:58:38.536665262 +0000 UTC m=+971.686329635" watchObservedRunningTime="2026-01-28 09:58:38.537959094 +0000 UTC m=+971.687623467" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.539529 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw" event={"ID":"ab02d2a5-6cf8-4f01-8381-2ea58829a846","Type":"ContainerStarted","Data":"f855ad8d36ba14b5df615f8348041697a3a03b1b804b29789c00adf134d27d96"} Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.539953 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.542601 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" event={"ID":"8139f6ed-d10a-4966-a7d2-a5a9412bd235","Type":"ContainerStarted","Data":"f56c1533042fae6465217dc50f3ab366cd8984a7887a0eeaf40d39bf42a4df7c"} Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.544560 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" event={"ID":"6300756c-e96b-40d5-ab3d-a044cb39016b","Type":"ContainerStarted","Data":"4dbf4c75181dff932fc771804f099e72c02c241edeb107aa36d5e2245c405dff"} Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.545148 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.547214 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t" event={"ID":"99934609-5407-442a-a887-ba47b516506a","Type":"ContainerStarted","Data":"662f6884331ca29814d3b131bd4ad2f6d317301e959e0ce772def4fe9ffbf751"} Jan 28 09:58:38 crc 
kubenswrapper[4735]: I0128 09:58:38.547322 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.554422 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" podStartSLOduration=3.087426006 podStartE2EDuration="26.554405278s" podCreationTimestamp="2026-01-28 09:58:12 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.835582011 +0000 UTC m=+946.985246384" lastFinishedPulling="2026-01-28 09:58:37.302561273 +0000 UTC m=+970.452225656" observedRunningTime="2026-01-28 09:58:38.55171119 +0000 UTC m=+971.701375583" watchObservedRunningTime="2026-01-28 09:58:38.554405278 +0000 UTC m=+971.704069651" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.577023 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm" podStartSLOduration=4.553956226 podStartE2EDuration="27.576997755s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.58391383 +0000 UTC m=+946.733578203" lastFinishedPulling="2026-01-28 09:58:36.606955359 +0000 UTC m=+969.756619732" observedRunningTime="2026-01-28 09:58:38.569175489 +0000 UTC m=+971.718839862" watchObservedRunningTime="2026-01-28 09:58:38.576997755 +0000 UTC m=+971.726662128" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.589607 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq" podStartSLOduration=3.2422439020000002 podStartE2EDuration="27.589588322s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:12.925784354 +0000 UTC m=+946.075448727" lastFinishedPulling="2026-01-28 09:58:37.273128774 +0000 UTC 
m=+970.422793147" observedRunningTime="2026-01-28 09:58:38.587007487 +0000 UTC m=+971.736671860" watchObservedRunningTime="2026-01-28 09:58:38.589588322 +0000 UTC m=+971.739252715" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.610321 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45" podStartSLOduration=3.8812056679999998 podStartE2EDuration="27.610307072s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:12.872949733 +0000 UTC m=+946.022614106" lastFinishedPulling="2026-01-28 09:58:36.602051137 +0000 UTC m=+969.751715510" observedRunningTime="2026-01-28 09:58:38.606617559 +0000 UTC m=+971.756281932" watchObservedRunningTime="2026-01-28 09:58:38.610307072 +0000 UTC m=+971.759971445" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.634895 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" podStartSLOduration=4.061830493 podStartE2EDuration="27.634873209s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.750078294 +0000 UTC m=+946.899742667" lastFinishedPulling="2026-01-28 09:58:37.32312101 +0000 UTC m=+970.472785383" observedRunningTime="2026-01-28 09:58:38.631869614 +0000 UTC m=+971.781533977" watchObservedRunningTime="2026-01-28 09:58:38.634873209 +0000 UTC m=+971.784537592" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.652047 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t" podStartSLOduration=4.503069143 podStartE2EDuration="27.652025021s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.45795858 +0000 UTC m=+946.607622953" lastFinishedPulling="2026-01-28 09:58:36.606914458 +0000 UTC m=+969.756578831" 
observedRunningTime="2026-01-28 09:58:38.647362873 +0000 UTC m=+971.797027246" watchObservedRunningTime="2026-01-28 09:58:38.652025021 +0000 UTC m=+971.801689394" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.661991 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" podStartSLOduration=4.074559954 podStartE2EDuration="27.661976161s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.750090024 +0000 UTC m=+946.899754397" lastFinishedPulling="2026-01-28 09:58:37.337506231 +0000 UTC m=+970.487170604" observedRunningTime="2026-01-28 09:58:38.661317334 +0000 UTC m=+971.810981717" watchObservedRunningTime="2026-01-28 09:58:38.661976161 +0000 UTC m=+971.811640534" Jan 28 09:58:38 crc kubenswrapper[4735]: I0128 09:58:38.686021 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw" podStartSLOduration=5.924044085 podStartE2EDuration="27.686002824s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:12.828776918 +0000 UTC m=+945.978441291" lastFinishedPulling="2026-01-28 09:58:34.590735657 +0000 UTC m=+967.740400030" observedRunningTime="2026-01-28 09:58:38.681277736 +0000 UTC m=+971.830942109" watchObservedRunningTime="2026-01-28 09:58:38.686002824 +0000 UTC m=+971.835667197" Jan 28 09:58:40 crc kubenswrapper[4735]: I0128 09:58:40.559146 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" event={"ID":"8139f6ed-d10a-4966-a7d2-a5a9412bd235","Type":"ContainerStarted","Data":"c3f67ea91f134b1ccf24b83eebe2121fa7a279f45410b392477cf9bd48bb3548"} Jan 28 09:58:40 crc kubenswrapper[4735]: I0128 09:58:40.559514 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" Jan 28 09:58:40 crc kubenswrapper[4735]: I0128 09:58:40.560387 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm" event={"ID":"75e33671-8cc3-4470-845b-b45c777c3e9b","Type":"ContainerStarted","Data":"76e27a33d458a9572c7f6ebdc3c65015419fa8508a7ec243277eb3a46d249d72"} Jan 28 09:58:40 crc kubenswrapper[4735]: I0128 09:58:40.583550 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" podStartSLOduration=27.465550265 podStartE2EDuration="29.583534853s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:37.537728752 +0000 UTC m=+970.687393125" lastFinishedPulling="2026-01-28 09:58:39.65571334 +0000 UTC m=+972.805377713" observedRunningTime="2026-01-28 09:58:40.581011729 +0000 UTC m=+973.730676112" watchObservedRunningTime="2026-01-28 09:58:40.583534853 +0000 UTC m=+973.733199226" Jan 28 09:58:40 crc kubenswrapper[4735]: I0128 09:58:40.600865 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm" podStartSLOduration=2.658779666 podStartE2EDuration="29.600843138s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:12.973895657 +0000 UTC m=+946.123560030" lastFinishedPulling="2026-01-28 09:58:39.915959129 +0000 UTC m=+973.065623502" observedRunningTime="2026-01-28 09:58:40.596621012 +0000 UTC m=+973.746285385" watchObservedRunningTime="2026-01-28 09:58:40.600843138 +0000 UTC m=+973.750507511" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 09:58:42.074411 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-5qn45" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 
09:58:42.102943 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-77khw" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 09:58:42.130819 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zwpzq" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 09:58:42.246250 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 09:58:42.449858 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xds2t" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 09:58:42.554372 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-94dd99d7d-6pbvz" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 09:58:42.612986 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8svwm" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 09:58:42.755652 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-2hkvv" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 09:58:42.893815 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pcp57" Jan 28 09:58:42 crc kubenswrapper[4735]: I0128 09:58:42.941906 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-767b8bc766-cmtww" Jan 28 09:58:43 crc kubenswrapper[4735]: I0128 09:58:43.583794 4735 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd" event={"ID":"ad40699a-df4d-4aef-ba9e-ec411784138f","Type":"ContainerStarted","Data":"a62f0409350b8300ddcffe21a8ff0b5a4d2a5fc3918dc2cd4b2e981cab00219a"} Jan 28 09:58:43 crc kubenswrapper[4735]: I0128 09:58:43.584001 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd" Jan 28 09:58:43 crc kubenswrapper[4735]: I0128 09:58:43.585088 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" event={"ID":"351faed5-c166-4766-99af-3c5b728b1f78","Type":"ContainerStarted","Data":"cea2021d45c274264971c137bd03cf28b02d6a73a0ca342e1e4c6ee0af0892a1"} Jan 28 09:58:43 crc kubenswrapper[4735]: I0128 09:58:43.585235 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" Jan 28 09:58:43 crc kubenswrapper[4735]: I0128 09:58:43.586220 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg" event={"ID":"60a1f78e-f6c9-4335-8545-533014df520c","Type":"ContainerStarted","Data":"ca45c2ab117ee3cec8d838a92c53692990876286c8c657b662e6b1023fa99b90"} Jan 28 09:58:43 crc kubenswrapper[4735]: I0128 09:58:43.586373 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg" Jan 28 09:58:43 crc kubenswrapper[4735]: I0128 09:58:43.601103 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd" podStartSLOduration=2.034547637 podStartE2EDuration="32.601082833s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:12.588224684 +0000 UTC m=+945.737889057" 
lastFinishedPulling="2026-01-28 09:58:43.15475988 +0000 UTC m=+976.304424253" observedRunningTime="2026-01-28 09:58:43.600971351 +0000 UTC m=+976.750635734" watchObservedRunningTime="2026-01-28 09:58:43.601082833 +0000 UTC m=+976.750747206" Jan 28 09:58:43 crc kubenswrapper[4735]: I0128 09:58:43.622409 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg" podStartSLOduration=2.215482325 podStartE2EDuration="32.62238772s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:12.507042114 +0000 UTC m=+945.656706487" lastFinishedPulling="2026-01-28 09:58:42.913947499 +0000 UTC m=+976.063611882" observedRunningTime="2026-01-28 09:58:43.618736868 +0000 UTC m=+976.768401241" watchObservedRunningTime="2026-01-28 09:58:43.62238772 +0000 UTC m=+976.772052093" Jan 28 09:58:43 crc kubenswrapper[4735]: I0128 09:58:43.635656 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" podStartSLOduration=2.40252182 podStartE2EDuration="31.635631162s" podCreationTimestamp="2026-01-28 09:58:12 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.725981451 +0000 UTC m=+946.875645814" lastFinishedPulling="2026-01-28 09:58:42.959090783 +0000 UTC m=+976.108755156" observedRunningTime="2026-01-28 09:58:43.634285518 +0000 UTC m=+976.783949891" watchObservedRunningTime="2026-01-28 09:58:43.635631162 +0000 UTC m=+976.785295535" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.117534 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.127841 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49ce66b9-31db-4d8d-adc4-f7e178cdceec-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp\" (UID: \"49ce66b9-31db-4d8d-adc4-f7e178cdceec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.388433 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5zkm2" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.397904 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.421117 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.421197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.425497 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-metrics-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.431466 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77910ebb-c49c-432e-b224-4fbafcaf7853-webhook-certs\") pod \"openstack-operator-controller-manager-7f5f655bfc-9jbp7\" (UID: \"77910ebb-c49c-432e-b224-4fbafcaf7853\") " pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.448132 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lvpdx" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.455983 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.609903 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" event={"ID":"c0301ee4-c19c-425e-9ba4-f66e2b405835","Type":"ContainerStarted","Data":"207a0175bcee2d2e2962002ac358101caac2e89822d19f8d9e5d4826c31215ca"} Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.610201 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.614601 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr" event={"ID":"7ee4898b-8413-497c-b0f8-6003e75d1bd4","Type":"ContainerStarted","Data":"08520ce270ee3e371434a71c2dc0586d16f3be99a571f60e0ad37e2e0adba6a0"} Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.614918 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.622826 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq" event={"ID":"4eee1f38-6076-431a-83c4-f6ac2c91bf2b","Type":"ContainerStarted","Data":"dd2d43274494475c979bbfc76b2c39052608888ab32e55253e32e014a259e957"} Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.623025 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.636762 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" 
podStartSLOduration=3.073798519 podStartE2EDuration="33.636725146s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.584096224 +0000 UTC m=+946.733760587" lastFinishedPulling="2026-01-28 09:58:44.147022841 +0000 UTC m=+977.296687214" observedRunningTime="2026-01-28 09:58:44.633515816 +0000 UTC m=+977.783180189" watchObservedRunningTime="2026-01-28 09:58:44.636725146 +0000 UTC m=+977.786389519" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.659123 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq" podStartSLOduration=2.74398395 podStartE2EDuration="33.659108478s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.182937053 +0000 UTC m=+946.332601426" lastFinishedPulling="2026-01-28 09:58:44.098061581 +0000 UTC m=+977.247725954" observedRunningTime="2026-01-28 09:58:44.651530138 +0000 UTC m=+977.801194511" watchObservedRunningTime="2026-01-28 09:58:44.659108478 +0000 UTC m=+977.808772851" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.705735 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr" podStartSLOduration=3.07049561 podStartE2EDuration="33.705714489s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.463569961 +0000 UTC m=+946.613234324" lastFinishedPulling="2026-01-28 09:58:44.09878883 +0000 UTC m=+977.248453203" observedRunningTime="2026-01-28 09:58:44.673121661 +0000 UTC m=+977.822786034" watchObservedRunningTime="2026-01-28 09:58:44.705714489 +0000 UTC m=+977.855378862" Jan 28 09:58:44 crc kubenswrapper[4735]: I0128 09:58:44.908869 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7"] Jan 28 09:58:44 crc kubenswrapper[4735]: W0128 
09:58:44.913270 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77910ebb_c49c_432e_b224_4fbafcaf7853.slice/crio-d1db4255c2411cf18ba51f077155ef3238c6a13b42c6ee0bc16b28924b13a205 WatchSource:0}: Error finding container d1db4255c2411cf18ba51f077155ef3238c6a13b42c6ee0bc16b28924b13a205: Status 404 returned error can't find the container with id d1db4255c2411cf18ba51f077155ef3238c6a13b42c6ee0bc16b28924b13a205 Jan 28 09:58:45 crc kubenswrapper[4735]: I0128 09:58:45.042289 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp"] Jan 28 09:58:45 crc kubenswrapper[4735]: W0128 09:58:45.045623 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ce66b9_31db_4d8d_adc4_f7e178cdceec.slice/crio-7e08f5bd75b754b1e33aa9918f522f50a7ed3f497b02c518758dcf16d766812b WatchSource:0}: Error finding container 7e08f5bd75b754b1e33aa9918f522f50a7ed3f497b02c518758dcf16d766812b: Status 404 returned error can't find the container with id 7e08f5bd75b754b1e33aa9918f522f50a7ed3f497b02c518758dcf16d766812b Jan 28 09:58:45 crc kubenswrapper[4735]: I0128 09:58:45.634849 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" event={"ID":"77910ebb-c49c-432e-b224-4fbafcaf7853","Type":"ContainerStarted","Data":"21b319122e8e11f8cf3dc60fc49f2f3346ba3d7171ac392a21feebba45a6a314"} Jan 28 09:58:45 crc kubenswrapper[4735]: I0128 09:58:45.634900 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" event={"ID":"77910ebb-c49c-432e-b224-4fbafcaf7853","Type":"ContainerStarted","Data":"d1db4255c2411cf18ba51f077155ef3238c6a13b42c6ee0bc16b28924b13a205"} Jan 28 09:58:45 crc kubenswrapper[4735]: I0128 
09:58:45.635026 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:58:45 crc kubenswrapper[4735]: I0128 09:58:45.639953 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" event={"ID":"49ce66b9-31db-4d8d-adc4-f7e178cdceec","Type":"ContainerStarted","Data":"7e08f5bd75b754b1e33aa9918f522f50a7ed3f497b02c518758dcf16d766812b"} Jan 28 09:58:45 crc kubenswrapper[4735]: I0128 09:58:45.665963 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" podStartSLOduration=33.665945227 podStartE2EDuration="33.665945227s" podCreationTimestamp="2026-01-28 09:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:58:45.66448742 +0000 UTC m=+978.814151803" watchObservedRunningTime="2026-01-28 09:58:45.665945227 +0000 UTC m=+978.815609600" Jan 28 09:58:47 crc kubenswrapper[4735]: I0128 09:58:47.654979 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" event={"ID":"49ce66b9-31db-4d8d-adc4-f7e178cdceec","Type":"ContainerStarted","Data":"545d771c28cbab6c62932874916930c528ca876fc12ca7d21b932fa7b4b82b5e"} Jan 28 09:58:47 crc kubenswrapper[4735]: I0128 09:58:47.655525 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:47 crc kubenswrapper[4735]: I0128 09:58:47.682896 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" podStartSLOduration=34.778532797 
podStartE2EDuration="36.682876006s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:45.047767085 +0000 UTC m=+978.197431458" lastFinishedPulling="2026-01-28 09:58:46.952110294 +0000 UTC m=+980.101774667" observedRunningTime="2026-01-28 09:58:47.679260225 +0000 UTC m=+980.828924628" watchObservedRunningTime="2026-01-28 09:58:47.682876006 +0000 UTC m=+980.832540379" Jan 28 09:58:47 crc kubenswrapper[4735]: I0128 09:58:47.815334 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-mz4t7" Jan 28 09:58:48 crc kubenswrapper[4735]: I0128 09:58:48.663959 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx" event={"ID":"dbfb38f5-9834-44f5-a84d-f17bb28538f6","Type":"ContainerStarted","Data":"d4d652f967e007dfc447d9c6cb0008935be73bcca20b0fe544b638eed76f1962"} Jan 28 09:58:48 crc kubenswrapper[4735]: I0128 09:58:48.664505 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx" Jan 28 09:58:48 crc kubenswrapper[4735]: I0128 09:58:48.666017 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths" event={"ID":"fa2276dc-647b-427e-9ff5-c25f1d351ee6","Type":"ContainerStarted","Data":"2ba70c13fa6701458bb7e30f171a76c01e9fcd9d20ba77e45dc55cccf1b60db8"} Jan 28 09:58:48 crc kubenswrapper[4735]: I0128 09:58:48.666333 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths" Jan 28 09:58:48 crc kubenswrapper[4735]: I0128 09:58:48.682647 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx" podStartSLOduration=3.192872116 
podStartE2EDuration="37.682616046s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.453471388 +0000 UTC m=+946.603135761" lastFinishedPulling="2026-01-28 09:58:47.943215318 +0000 UTC m=+981.092879691" observedRunningTime="2026-01-28 09:58:48.678698377 +0000 UTC m=+981.828362760" watchObservedRunningTime="2026-01-28 09:58:48.682616046 +0000 UTC m=+981.832280419" Jan 28 09:58:48 crc kubenswrapper[4735]: I0128 09:58:48.702673 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths" podStartSLOduration=3.2532336539999998 podStartE2EDuration="37.702648829s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.449913239 +0000 UTC m=+946.599577612" lastFinishedPulling="2026-01-28 09:58:47.899328414 +0000 UTC m=+981.048992787" observedRunningTime="2026-01-28 09:58:48.694001662 +0000 UTC m=+981.843666035" watchObservedRunningTime="2026-01-28 09:58:48.702648829 +0000 UTC m=+981.852313202" Jan 28 09:58:49 crc kubenswrapper[4735]: I0128 09:58:49.674938 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q" event={"ID":"d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17","Type":"ContainerStarted","Data":"f71c24ce0aa7c446388a0b2ca40d7eb144e4330cd786315934f90586278c3094"} Jan 28 09:58:49 crc kubenswrapper[4735]: I0128 09:58:49.675560 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q" Jan 28 09:58:49 crc kubenswrapper[4735]: I0128 09:58:49.702036 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q" podStartSLOduration=3.383465239 podStartE2EDuration="38.70200388s" podCreationTimestamp="2026-01-28 09:58:11 +0000 UTC" firstStartedPulling="2026-01-28 
09:58:13.588463043 +0000 UTC m=+946.738127416" lastFinishedPulling="2026-01-28 09:58:48.907001684 +0000 UTC m=+982.056666057" observedRunningTime="2026-01-28 09:58:49.691796632 +0000 UTC m=+982.841461015" watchObservedRunningTime="2026-01-28 09:58:49.70200388 +0000 UTC m=+982.851668303" Jan 28 09:58:50 crc kubenswrapper[4735]: I0128 09:58:50.681634 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" event={"ID":"d401e4f7-afe6-4095-86cc-f43f43130c4b","Type":"ContainerStarted","Data":"8ce0ebdf79d2e97469c877e088e4d748d67ab51fe0effd3d4f747af97238cbcc"} Jan 28 09:58:50 crc kubenswrapper[4735]: I0128 09:58:50.705329 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jbc6s" podStartSLOduration=2.514668189 podStartE2EDuration="38.705305818s" podCreationTimestamp="2026-01-28 09:58:12 +0000 UTC" firstStartedPulling="2026-01-28 09:58:13.715592762 +0000 UTC m=+946.865257135" lastFinishedPulling="2026-01-28 09:58:49.906230391 +0000 UTC m=+983.055894764" observedRunningTime="2026-01-28 09:58:50.699721118 +0000 UTC m=+983.849385491" watchObservedRunningTime="2026-01-28 09:58:50.705305818 +0000 UTC m=+983.854970211" Jan 28 09:58:51 crc kubenswrapper[4735]: I0128 09:58:51.977352 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-vxtqg" Jan 28 09:58:52 crc kubenswrapper[4735]: I0128 09:58:52.039954 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-vlxzd" Jan 28 09:58:52 crc kubenswrapper[4735]: I0128 09:58:52.249731 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-gtbjm" Jan 28 09:58:52 crc kubenswrapper[4735]: I0128 
09:58:52.302572 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pxbwq" Jan 28 09:58:52 crc kubenswrapper[4735]: I0128 09:58:52.440188 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr" Jan 28 09:58:52 crc kubenswrapper[4735]: I0128 09:58:52.574806 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-xvd9h" Jan 28 09:58:52 crc kubenswrapper[4735]: I0128 09:58:52.805016 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bcnkf" Jan 28 09:58:54 crc kubenswrapper[4735]: I0128 09:58:54.405647 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp" Jan 28 09:58:54 crc kubenswrapper[4735]: I0128 09:58:54.473465 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7f5f655bfc-9jbp7" Jan 28 09:59:02 crc kubenswrapper[4735]: I0128 09:59:02.274280 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-b2ths" Jan 28 09:59:02 crc kubenswrapper[4735]: I0128 09:59:02.294008 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-lt6sx" Jan 28 09:59:02 crc kubenswrapper[4735]: I0128 09:59:02.497437 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-fp94q" Jan 28 09:59:05 crc kubenswrapper[4735]: I0128 09:59:05.429862 4735 patch_prober.go:28] 
interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 09:59:05 crc kubenswrapper[4735]: I0128 09:59:05.429955 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 09:59:05 crc kubenswrapper[4735]: I0128 09:59:05.430035 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 09:59:05 crc kubenswrapper[4735]: I0128 09:59:05.430949 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"426bd6318f22bcaa2e01148fa51d1f918d9825a66fecb193aef53ed2e7aff201"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 09:59:05 crc kubenswrapper[4735]: I0128 09:59:05.431063 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://426bd6318f22bcaa2e01148fa51d1f918d9825a66fecb193aef53ed2e7aff201" gracePeriod=600 Jan 28 09:59:05 crc kubenswrapper[4735]: I0128 09:59:05.804492 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="426bd6318f22bcaa2e01148fa51d1f918d9825a66fecb193aef53ed2e7aff201" exitCode=0 Jan 28 09:59:05 crc kubenswrapper[4735]: I0128 
09:59:05.804575 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"426bd6318f22bcaa2e01148fa51d1f918d9825a66fecb193aef53ed2e7aff201"} Jan 28 09:59:05 crc kubenswrapper[4735]: I0128 09:59:05.804901 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"e4a04c510b9b270008568de903a447cbfe28b5ec6b75521aed00c265673e243e"} Jan 28 09:59:05 crc kubenswrapper[4735]: I0128 09:59:05.804925 4735 scope.go:117] "RemoveContainer" containerID="ffb90dbb3d67df0d3c11867f5146dc5bfc3eacdd73311f14c6982a1e91ecde2d" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.220195 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6st6d"] Jan 28 09:59:16 crc kubenswrapper[4735]: E0128 09:59:16.223188 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerName="extract-utilities" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.223226 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerName="extract-utilities" Jan 28 09:59:16 crc kubenswrapper[4735]: E0128 09:59:16.223251 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerName="extract-content" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.223258 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerName="extract-content" Jan 28 09:59:16 crc kubenswrapper[4735]: E0128 09:59:16.223279 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerName="registry-server" Jan 28 09:59:16 crc 
kubenswrapper[4735]: I0128 09:59:16.223285 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerName="registry-server" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.223582 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d8fc87-b64d-48ec-95e2-20a73de5be8c" containerName="registry-server" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.231980 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6st6d"] Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.232519 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.235151 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.236176 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5h94t" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.236769 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.236847 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.263313 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-htdnd"] Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.264419 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.268927 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.278039 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-htdnd"] Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.400045 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-htdnd\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.400122 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8fd\" (UniqueName: \"kubernetes.io/projected/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-kube-api-access-ss8fd\") pod \"dnsmasq-dns-78dd6ddcc-htdnd\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.400165 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9139575f-a07b-4c92-971d-9c0cb234c43d-config\") pod \"dnsmasq-dns-675f4bcbfc-6st6d\" (UID: \"9139575f-a07b-4c92-971d-9c0cb234c43d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.400217 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tnql\" (UniqueName: \"kubernetes.io/projected/9139575f-a07b-4c92-971d-9c0cb234c43d-kube-api-access-7tnql\") pod \"dnsmasq-dns-675f4bcbfc-6st6d\" (UID: \"9139575f-a07b-4c92-971d-9c0cb234c43d\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.400286 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-config\") pod \"dnsmasq-dns-78dd6ddcc-htdnd\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.501840 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tnql\" (UniqueName: \"kubernetes.io/projected/9139575f-a07b-4c92-971d-9c0cb234c43d-kube-api-access-7tnql\") pod \"dnsmasq-dns-675f4bcbfc-6st6d\" (UID: \"9139575f-a07b-4c92-971d-9c0cb234c43d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.501981 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-config\") pod \"dnsmasq-dns-78dd6ddcc-htdnd\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.502067 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-htdnd\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.502134 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss8fd\" (UniqueName: \"kubernetes.io/projected/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-kube-api-access-ss8fd\") pod \"dnsmasq-dns-78dd6ddcc-htdnd\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 
crc kubenswrapper[4735]: I0128 09:59:16.502187 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9139575f-a07b-4c92-971d-9c0cb234c43d-config\") pod \"dnsmasq-dns-675f4bcbfc-6st6d\" (UID: \"9139575f-a07b-4c92-971d-9c0cb234c43d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.503906 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9139575f-a07b-4c92-971d-9c0cb234c43d-config\") pod \"dnsmasq-dns-675f4bcbfc-6st6d\" (UID: \"9139575f-a07b-4c92-971d-9c0cb234c43d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.505927 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-htdnd\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.506868 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-config\") pod \"dnsmasq-dns-78dd6ddcc-htdnd\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.534545 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tnql\" (UniqueName: \"kubernetes.io/projected/9139575f-a07b-4c92-971d-9c0cb234c43d-kube-api-access-7tnql\") pod \"dnsmasq-dns-675f4bcbfc-6st6d\" (UID: \"9139575f-a07b-4c92-971d-9c0cb234c43d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.534981 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ss8fd\" (UniqueName: \"kubernetes.io/projected/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-kube-api-access-ss8fd\") pod \"dnsmasq-dns-78dd6ddcc-htdnd\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.560710 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.582016 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:16 crc kubenswrapper[4735]: I0128 09:59:16.996711 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6st6d"] Jan 28 09:59:17 crc kubenswrapper[4735]: W0128 09:59:17.001899 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9139575f_a07b_4c92_971d_9c0cb234c43d.slice/crio-441d3f3fe316d3d6b93575e3b68b9da1eacb694c1e74610dd6c1a6598f1c1c0a WatchSource:0}: Error finding container 441d3f3fe316d3d6b93575e3b68b9da1eacb694c1e74610dd6c1a6598f1c1c0a: Status 404 returned error can't find the container with id 441d3f3fe316d3d6b93575e3b68b9da1eacb694c1e74610dd6c1a6598f1c1c0a Jan 28 09:59:17 crc kubenswrapper[4735]: I0128 09:59:17.062298 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-htdnd"] Jan 28 09:59:17 crc kubenswrapper[4735]: W0128 09:59:17.066953 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod125d5183_f2c8_4c26_bc4d_cc6c0b22e8a4.slice/crio-314486b7957175bc441012dfcafd4d4421d9808e875bb1b0ddca60d92259af25 WatchSource:0}: Error finding container 314486b7957175bc441012dfcafd4d4421d9808e875bb1b0ddca60d92259af25: Status 404 returned error can't find the container with id 
314486b7957175bc441012dfcafd4d4421d9808e875bb1b0ddca60d92259af25 Jan 28 09:59:17 crc kubenswrapper[4735]: I0128 09:59:17.888796 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" event={"ID":"9139575f-a07b-4c92-971d-9c0cb234c43d","Type":"ContainerStarted","Data":"441d3f3fe316d3d6b93575e3b68b9da1eacb694c1e74610dd6c1a6598f1c1c0a"} Jan 28 09:59:17 crc kubenswrapper[4735]: I0128 09:59:17.890318 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" event={"ID":"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4","Type":"ContainerStarted","Data":"314486b7957175bc441012dfcafd4d4421d9808e875bb1b0ddca60d92259af25"} Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.195071 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6st6d"] Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.225528 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6k6cm"] Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.226883 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.234561 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6k6cm"] Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.342680 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdp66\" (UniqueName: \"kubernetes.io/projected/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-kube-api-access-kdp66\") pod \"dnsmasq-dns-666b6646f7-6k6cm\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.342759 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6k6cm\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.342792 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-config\") pod \"dnsmasq-dns-666b6646f7-6k6cm\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.443814 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdp66\" (UniqueName: \"kubernetes.io/projected/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-kube-api-access-kdp66\") pod \"dnsmasq-dns-666b6646f7-6k6cm\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.443889 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6k6cm\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.443918 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-config\") pod \"dnsmasq-dns-666b6646f7-6k6cm\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.444766 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-config\") pod \"dnsmasq-dns-666b6646f7-6k6cm\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.444900 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-dns-svc\") pod \"dnsmasq-dns-666b6646f7-6k6cm\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.477941 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdp66\" (UniqueName: \"kubernetes.io/projected/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-kube-api-access-kdp66\") pod \"dnsmasq-dns-666b6646f7-6k6cm\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.534151 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-htdnd"] Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.565939 4735 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.567061 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-455hk"] Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.569081 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.579221 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-455hk"] Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.650672 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-455hk\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.650734 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2x8v\" (UniqueName: \"kubernetes.io/projected/370e2ab9-a959-4009-8f5b-8973f062cf54-kube-api-access-c2x8v\") pod \"dnsmasq-dns-57d769cc4f-455hk\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.650794 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-config\") pod \"dnsmasq-dns-57d769cc4f-455hk\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.752894 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-455hk\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.752963 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2x8v\" (UniqueName: \"kubernetes.io/projected/370e2ab9-a959-4009-8f5b-8973f062cf54-kube-api-access-c2x8v\") pod \"dnsmasq-dns-57d769cc4f-455hk\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.754124 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-config\") pod \"dnsmasq-dns-57d769cc4f-455hk\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.762572 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-455hk\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.763589 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-config\") pod \"dnsmasq-dns-57d769cc4f-455hk\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.775770 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2x8v\" (UniqueName: \"kubernetes.io/projected/370e2ab9-a959-4009-8f5b-8973f062cf54-kube-api-access-c2x8v\") pod 
\"dnsmasq-dns-57d769cc4f-455hk\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.906844 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:19 crc kubenswrapper[4735]: I0128 09:59:19.977934 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6k6cm"] Jan 28 09:59:19 crc kubenswrapper[4735]: W0128 09:59:19.985326 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64a4167f_5e4f_4110_8a0f_6aeb34dd4034.slice/crio-97d903ca7e91c0a4c493b2547e0bd3f4b87fc00ce77a017f5008aa71c00c6c27 WatchSource:0}: Error finding container 97d903ca7e91c0a4c493b2547e0bd3f4b87fc00ce77a017f5008aa71c00c6c27: Status 404 returned error can't find the container with id 97d903ca7e91c0a4c493b2547e0bd3f4b87fc00ce77a017f5008aa71c00c6c27 Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.362490 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-455hk"] Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.370416 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.371923 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.374703 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.376316 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zfw4x" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.377216 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.379032 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.379992 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.380548 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.380597 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.391295 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465633 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465698 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465763 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-server-conf\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465794 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465832 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-config-data\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465865 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465893 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465913 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4jxk\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-kube-api-access-x4jxk\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465965 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.465991 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/348630cf-4549-4b7a-8784-9b1640f64c39-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.466014 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/348630cf-4549-4b7a-8784-9b1640f64c39-pod-info\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.567352 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-config-data\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " 
pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.567812 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.567845 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.567872 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4jxk\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-kube-api-access-x4jxk\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.567923 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.567947 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/348630cf-4549-4b7a-8784-9b1640f64c39-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.567972 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/348630cf-4549-4b7a-8784-9b1640f64c39-pod-info\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.568000 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.568031 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.568076 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-server-conf\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.568111 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.568373 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.568500 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.568854 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.569298 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.569866 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.569883 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-server-conf\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.575263 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.589127 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/348630cf-4549-4b7a-8784-9b1640f64c39-pod-info\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.590571 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.590585 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/348630cf-4549-4b7a-8784-9b1640f64c39-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.597143 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4jxk\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-kube-api-access-x4jxk\") pod \"rabbitmq-server-0\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.618023 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"348630cf-4549-4b7a-8784-9b1640f64c39\") " pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.673323 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.674494 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.676333 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.676566 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.676670 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.676893 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.677006 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9j4gv" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.677122 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.678199 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.687322 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.714052 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771057 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771136 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771173 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbzvf\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-kube-api-access-lbzvf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771381 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771423 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771474 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771498 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771517 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771590 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771830 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.771868 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873042 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873085 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873112 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbzvf\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-kube-api-access-lbzvf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873173 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc 
kubenswrapper[4735]: I0128 09:59:20.873192 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873237 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873258 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873286 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873315 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873365 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873382 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.873590 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.874312 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.874346 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.874834 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.875016 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.875164 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.877468 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.880911 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.882419 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.891821 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.893326 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbzvf\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-kube-api-access-lbzvf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.899518 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:20 crc kubenswrapper[4735]: I0128 09:59:20.943894 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" event={"ID":"64a4167f-5e4f-4110-8a0f-6aeb34dd4034","Type":"ContainerStarted","Data":"97d903ca7e91c0a4c493b2547e0bd3f4b87fc00ce77a017f5008aa71c00c6c27"} Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.010131 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.761813 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.765724 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.772641 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.774856 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.776298 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gkhgs" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.782433 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.782553 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.788300 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.888934 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c58a2fe7-0339-42bb-96f9-084e2b9a3182-config-data-default\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.888973 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blf9r\" (UniqueName: \"kubernetes.io/projected/c58a2fe7-0339-42bb-96f9-084e2b9a3182-kube-api-access-blf9r\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.888994 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58a2fe7-0339-42bb-96f9-084e2b9a3182-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.889024 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.889051 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c58a2fe7-0339-42bb-96f9-084e2b9a3182-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.889073 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c58a2fe7-0339-42bb-96f9-084e2b9a3182-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.889099 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c58a2fe7-0339-42bb-96f9-084e2b9a3182-kolla-config\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.889576 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c58a2fe7-0339-42bb-96f9-084e2b9a3182-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.992364 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c58a2fe7-0339-42bb-96f9-084e2b9a3182-config-data-default\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.992418 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blf9r\" (UniqueName: \"kubernetes.io/projected/c58a2fe7-0339-42bb-96f9-084e2b9a3182-kube-api-access-blf9r\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.992443 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58a2fe7-0339-42bb-96f9-084e2b9a3182-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.992486 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.992523 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c58a2fe7-0339-42bb-96f9-084e2b9a3182-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.992552 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c58a2fe7-0339-42bb-96f9-084e2b9a3182-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.992592 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c58a2fe7-0339-42bb-96f9-084e2b9a3182-kolla-config\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.992615 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c58a2fe7-0339-42bb-96f9-084e2b9a3182-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.994134 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.996246 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c58a2fe7-0339-42bb-96f9-084e2b9a3182-kolla-config\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc 
kubenswrapper[4735]: I0128 09:59:21.996877 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c58a2fe7-0339-42bb-96f9-084e2b9a3182-config-data-default\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.997038 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c58a2fe7-0339-42bb-96f9-084e2b9a3182-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.997110 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c58a2fe7-0339-42bb-96f9-084e2b9a3182-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.997999 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c58a2fe7-0339-42bb-96f9-084e2b9a3182-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:21 crc kubenswrapper[4735]: I0128 09:59:21.998676 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58a2fe7-0339-42bb-96f9-084e2b9a3182-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:22 crc kubenswrapper[4735]: I0128 09:59:22.012091 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blf9r\" (UniqueName: 
\"kubernetes.io/projected/c58a2fe7-0339-42bb-96f9-084e2b9a3182-kube-api-access-blf9r\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:22 crc kubenswrapper[4735]: I0128 09:59:22.014467 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c58a2fe7-0339-42bb-96f9-084e2b9a3182\") " pod="openstack/openstack-galera-0" Jan 28 09:59:22 crc kubenswrapper[4735]: I0128 09:59:22.095851 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.223874 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.225259 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.227971 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.228196 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.228662 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-lqklc" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.232074 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.251981 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.312062 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/beb7a8ad-5283-42ff-85af-30eb94a8b291-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.312134 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.312163 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/beb7a8ad-5283-42ff-85af-30eb94a8b291-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.312190 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n57ls\" (UniqueName: \"kubernetes.io/projected/beb7a8ad-5283-42ff-85af-30eb94a8b291-kube-api-access-n57ls\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.312256 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/beb7a8ad-5283-42ff-85af-30eb94a8b291-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 
09:59:23.312392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/beb7a8ad-5283-42ff-85af-30eb94a8b291-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.312464 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/beb7a8ad-5283-42ff-85af-30eb94a8b291-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.312547 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb7a8ad-5283-42ff-85af-30eb94a8b291-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.414403 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb7a8ad-5283-42ff-85af-30eb94a8b291-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.414735 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/beb7a8ad-5283-42ff-85af-30eb94a8b291-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 
09:59:23.414985 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.415003 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/beb7a8ad-5283-42ff-85af-30eb94a8b291-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.415021 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n57ls\" (UniqueName: \"kubernetes.io/projected/beb7a8ad-5283-42ff-85af-30eb94a8b291-kube-api-access-n57ls\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.415058 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/beb7a8ad-5283-42ff-85af-30eb94a8b291-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.415089 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/beb7a8ad-5283-42ff-85af-30eb94a8b291-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.415108 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/beb7a8ad-5283-42ff-85af-30eb94a8b291-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.415380 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.415622 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/beb7a8ad-5283-42ff-85af-30eb94a8b291-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.416575 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/beb7a8ad-5283-42ff-85af-30eb94a8b291-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.416589 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/beb7a8ad-5283-42ff-85af-30eb94a8b291-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.417129 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/beb7a8ad-5283-42ff-85af-30eb94a8b291-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.424680 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb7a8ad-5283-42ff-85af-30eb94a8b291-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.425227 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/beb7a8ad-5283-42ff-85af-30eb94a8b291-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.441346 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.447008 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.447855 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.456358 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.456888 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5dl6t" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.458015 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.464598 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n57ls\" (UniqueName: \"kubernetes.io/projected/beb7a8ad-5283-42ff-85af-30eb94a8b291-kube-api-access-n57ls\") pod \"openstack-cell1-galera-0\" (UID: \"beb7a8ad-5283-42ff-85af-30eb94a8b291\") " pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.468394 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.520538 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c478e32a-3e57-49eb-8bd0-cff353ad0f56-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.520596 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmd2\" (UniqueName: \"kubernetes.io/projected/c478e32a-3e57-49eb-8bd0-cff353ad0f56-kube-api-access-hsmd2\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.520669 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c478e32a-3e57-49eb-8bd0-cff353ad0f56-config-data\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.520715 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c478e32a-3e57-49eb-8bd0-cff353ad0f56-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.520768 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c478e32a-3e57-49eb-8bd0-cff353ad0f56-kolla-config\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.549159 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.622569 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c478e32a-3e57-49eb-8bd0-cff353ad0f56-config-data\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.623850 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c478e32a-3e57-49eb-8bd0-cff353ad0f56-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.624017 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c478e32a-3e57-49eb-8bd0-cff353ad0f56-kolla-config\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.624163 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c478e32a-3e57-49eb-8bd0-cff353ad0f56-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.624284 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmd2\" (UniqueName: \"kubernetes.io/projected/c478e32a-3e57-49eb-8bd0-cff353ad0f56-kube-api-access-hsmd2\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.623761 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/c478e32a-3e57-49eb-8bd0-cff353ad0f56-config-data\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.625357 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c478e32a-3e57-49eb-8bd0-cff353ad0f56-kolla-config\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.631375 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c478e32a-3e57-49eb-8bd0-cff353ad0f56-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.647447 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c478e32a-3e57-49eb-8bd0-cff353ad0f56-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.660860 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmd2\" (UniqueName: \"kubernetes.io/projected/c478e32a-3e57-49eb-8bd0-cff353ad0f56-kube-api-access-hsmd2\") pod \"memcached-0\" (UID: \"c478e32a-3e57-49eb-8bd0-cff353ad0f56\") " pod="openstack/memcached-0" Jan 28 09:59:23 crc kubenswrapper[4735]: I0128 09:59:23.832824 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 09:59:24 crc kubenswrapper[4735]: W0128 09:59:24.252391 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370e2ab9_a959_4009_8f5b_8973f062cf54.slice/crio-c3e73cd69e36bccb8cd4049538a8016004497cb2d5281efd71737a15c4336379 WatchSource:0}: Error finding container c3e73cd69e36bccb8cd4049538a8016004497cb2d5281efd71737a15c4336379: Status 404 returned error can't find the container with id c3e73cd69e36bccb8cd4049538a8016004497cb2d5281efd71737a15c4336379 Jan 28 09:59:24 crc kubenswrapper[4735]: I0128 09:59:24.264669 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 09:59:25 crc kubenswrapper[4735]: I0128 09:59:25.001876 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-455hk" event={"ID":"370e2ab9-a959-4009-8f5b-8973f062cf54","Type":"ContainerStarted","Data":"c3e73cd69e36bccb8cd4049538a8016004497cb2d5281efd71737a15c4336379"} Jan 28 09:59:25 crc kubenswrapper[4735]: I0128 09:59:25.435281 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 09:59:25 crc kubenswrapper[4735]: I0128 09:59:25.436190 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 09:59:25 crc kubenswrapper[4735]: I0128 09:59:25.438107 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hkjpx" Jan 28 09:59:25 crc kubenswrapper[4735]: I0128 09:59:25.450271 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 09:59:25 crc kubenswrapper[4735]: I0128 09:59:25.558665 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql8mv\" (UniqueName: \"kubernetes.io/projected/4af8f981-a165-4485-a3da-f6c0dfaecb00-kube-api-access-ql8mv\") pod \"kube-state-metrics-0\" (UID: \"4af8f981-a165-4485-a3da-f6c0dfaecb00\") " pod="openstack/kube-state-metrics-0" Jan 28 09:59:25 crc kubenswrapper[4735]: I0128 09:59:25.659800 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql8mv\" (UniqueName: \"kubernetes.io/projected/4af8f981-a165-4485-a3da-f6c0dfaecb00-kube-api-access-ql8mv\") pod \"kube-state-metrics-0\" (UID: \"4af8f981-a165-4485-a3da-f6c0dfaecb00\") " pod="openstack/kube-state-metrics-0" Jan 28 09:59:25 crc kubenswrapper[4735]: I0128 09:59:25.691587 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql8mv\" (UniqueName: \"kubernetes.io/projected/4af8f981-a165-4485-a3da-f6c0dfaecb00-kube-api-access-ql8mv\") pod \"kube-state-metrics-0\" (UID: \"4af8f981-a165-4485-a3da-f6c0dfaecb00\") " pod="openstack/kube-state-metrics-0" Jan 28 09:59:25 crc kubenswrapper[4735]: I0128 09:59:25.755757 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 09:59:27 crc kubenswrapper[4735]: I0128 09:59:27.140990 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.721187 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rdcgm"] Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.722322 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.724334 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-bbvpb" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.724950 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.725739 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.739557 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdcgm"] Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.773648 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lh9fz"] Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.775627 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.805889 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lh9fz"] Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.811455 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c583b0-22fd-4ccf-907a-1bb119119830-combined-ca-bundle\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.811557 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21c583b0-22fd-4ccf-907a-1bb119119830-var-run-ovn\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.811593 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21c583b0-22fd-4ccf-907a-1bb119119830-var-log-ovn\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.811617 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21c583b0-22fd-4ccf-907a-1bb119119830-scripts\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.811791 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/21c583b0-22fd-4ccf-907a-1bb119119830-var-run\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.811851 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c583b0-22fd-4ccf-907a-1bb119119830-ovn-controller-tls-certs\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.811880 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tf7\" (UniqueName: \"kubernetes.io/projected/21c583b0-22fd-4ccf-907a-1bb119119830-kube-api-access-p2tf7\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.913180 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21c583b0-22fd-4ccf-907a-1bb119119830-var-run-ovn\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.913458 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21c583b0-22fd-4ccf-907a-1bb119119830-var-log-ovn\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.913547 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21c583b0-22fd-4ccf-907a-1bb119119830-scripts\") pod 
\"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.913648 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a32b8d-eace-4a5f-b187-baf0e7d67479-scripts\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.913731 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-etc-ovs\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.913858 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21c583b0-22fd-4ccf-907a-1bb119119830-var-run\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.913942 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2tf7\" (UniqueName: \"kubernetes.io/projected/21c583b0-22fd-4ccf-907a-1bb119119830-kube-api-access-p2tf7\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.914024 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c583b0-22fd-4ccf-907a-1bb119119830-ovn-controller-tls-certs\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " 
pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.914123 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-var-log\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.914191 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c583b0-22fd-4ccf-907a-1bb119119830-combined-ca-bundle\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.914272 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-var-lib\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.914356 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-var-run\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.914434 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgxr\" (UniqueName: \"kubernetes.io/projected/f7a32b8d-eace-4a5f-b187-baf0e7d67479-kube-api-access-vkgxr\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 
09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.913775 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21c583b0-22fd-4ccf-907a-1bb119119830-var-run-ovn\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.914781 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21c583b0-22fd-4ccf-907a-1bb119119830-var-run\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.915740 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21c583b0-22fd-4ccf-907a-1bb119119830-var-log-ovn\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.915948 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21c583b0-22fd-4ccf-907a-1bb119119830-scripts\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.928306 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/21c583b0-22fd-4ccf-907a-1bb119119830-ovn-controller-tls-certs\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.928396 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/21c583b0-22fd-4ccf-907a-1bb119119830-combined-ca-bundle\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:28 crc kubenswrapper[4735]: I0128 09:59:28.932194 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2tf7\" (UniqueName: \"kubernetes.io/projected/21c583b0-22fd-4ccf-907a-1bb119119830-kube-api-access-p2tf7\") pod \"ovn-controller-rdcgm\" (UID: \"21c583b0-22fd-4ccf-907a-1bb119119830\") " pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.016122 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a32b8d-eace-4a5f-b187-baf0e7d67479-scripts\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.016465 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-etc-ovs\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.016793 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-etc-ovs\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.017124 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-var-log\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " 
pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.017245 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-var-lib\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.017345 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-var-run\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.017429 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-var-lib\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.017406 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-var-run\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.017434 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgxr\" (UniqueName: \"kubernetes.io/projected/f7a32b8d-eace-4a5f-b187-baf0e7d67479-kube-api-access-vkgxr\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.017346 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7a32b8d-eace-4a5f-b187-baf0e7d67479-var-log\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.018067 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7a32b8d-eace-4a5f-b187-baf0e7d67479-scripts\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.040479 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgxr\" (UniqueName: \"kubernetes.io/projected/f7a32b8d-eace-4a5f-b187-baf0e7d67479-kube-api-access-vkgxr\") pod \"ovn-controller-ovs-lh9fz\" (UID: \"f7a32b8d-eace-4a5f-b187-baf0e7d67479\") " pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.088621 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.103474 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.641934 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.643308 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.646405 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.646788 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2hft4" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.647023 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.647140 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.647043 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.680983 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.738968 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r29mk\" (UniqueName: \"kubernetes.io/projected/4362d077-82db-43b3-8df1-b60d2d34028c-kube-api-access-r29mk\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.739036 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4362d077-82db-43b3-8df1-b60d2d34028c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.739066 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4362d077-82db-43b3-8df1-b60d2d34028c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.739101 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4362d077-82db-43b3-8df1-b60d2d34028c-config\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.739120 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4362d077-82db-43b3-8df1-b60d2d34028c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.739152 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4362d077-82db-43b3-8df1-b60d2d34028c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.739179 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4362d077-82db-43b3-8df1-b60d2d34028c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.739338 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.841094 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4362d077-82db-43b3-8df1-b60d2d34028c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.841158 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4362d077-82db-43b3-8df1-b60d2d34028c-config\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.841185 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4362d077-82db-43b3-8df1-b60d2d34028c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.841214 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4362d077-82db-43b3-8df1-b60d2d34028c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.841243 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4362d077-82db-43b3-8df1-b60d2d34028c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 
crc kubenswrapper[4735]: I0128 09:59:29.841315 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.841359 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r29mk\" (UniqueName: \"kubernetes.io/projected/4362d077-82db-43b3-8df1-b60d2d34028c-kube-api-access-r29mk\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.841399 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4362d077-82db-43b3-8df1-b60d2d34028c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.841957 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4362d077-82db-43b3-8df1-b60d2d34028c-config\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.842203 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.842373 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4362d077-82db-43b3-8df1-b60d2d34028c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.843129 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4362d077-82db-43b3-8df1-b60d2d34028c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.844785 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4362d077-82db-43b3-8df1-b60d2d34028c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.845928 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4362d077-82db-43b3-8df1-b60d2d34028c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.855791 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4362d077-82db-43b3-8df1-b60d2d34028c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.858848 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r29mk\" (UniqueName: \"kubernetes.io/projected/4362d077-82db-43b3-8df1-b60d2d34028c-kube-api-access-r29mk\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " 
pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.866341 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4362d077-82db-43b3-8df1-b60d2d34028c\") " pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:29 crc kubenswrapper[4735]: I0128 09:59:29.971941 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:32 crc kubenswrapper[4735]: I0128 09:59:32.054512 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"beb7a8ad-5283-42ff-85af-30eb94a8b291","Type":"ContainerStarted","Data":"2f7de7ee465f0dd73761619ba2cc74ec717e172745ddc4acd22763236e2c8a56"} Jan 28 09:59:32 crc kubenswrapper[4735]: I0128 09:59:32.292038 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 09:59:32 crc kubenswrapper[4735]: I0128 09:59:32.325991 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 09:59:32 crc kubenswrapper[4735]: W0128 09:59:32.932815 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb11b48d_f557_4ee4_aa1a_c23e68f65fd1.slice/crio-e9f7a0b45360c6f1a0e991bda107429d76aa13104521ed59d2905afb685b91f1 WatchSource:0}: Error finding container e9f7a0b45360c6f1a0e991bda107429d76aa13104521ed59d2905afb685b91f1: Status 404 returned error can't find the container with id e9f7a0b45360c6f1a0e991bda107429d76aa13104521ed59d2905afb685b91f1 Jan 28 09:59:32 crc kubenswrapper[4735]: W0128 09:59:32.933303 4735 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc58a2fe7_0339_42bb_96f9_084e2b9a3182.slice/crio-a8ed4e5c90a8e2f4a12654d582d3ae52799a298f24e1f7020606ba667a2ca554 WatchSource:0}: Error finding container a8ed4e5c90a8e2f4a12654d582d3ae52799a298f24e1f7020606ba667a2ca554: Status 404 returned error can't find the container with id a8ed4e5c90a8e2f4a12654d582d3ae52799a298f24e1f7020606ba667a2ca554 Jan 28 09:59:32 crc kubenswrapper[4735]: E0128 09:59:32.953497 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 09:59:32 crc kubenswrapper[4735]: E0128 09:59:32.953744 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss8fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-htdnd_openstack(125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:59:32 crc kubenswrapper[4735]: E0128 09:59:32.955013 4735 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" podUID="125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.068877 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c58a2fe7-0339-42bb-96f9-084e2b9a3182","Type":"ContainerStarted","Data":"a8ed4e5c90a8e2f4a12654d582d3ae52799a298f24e1f7020606ba667a2ca554"} Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.070217 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1","Type":"ContainerStarted","Data":"e9f7a0b45360c6f1a0e991bda107429d76aa13104521ed59d2905afb685b91f1"} Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.138306 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.139666 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.143336 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.146498 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.146700 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-cwmgm" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.150038 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.163563 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.210313 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc607c73-caa4-4986-90b6-1b101b32970b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.210355 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc607c73-caa4-4986-90b6-1b101b32970b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.210414 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc607c73-caa4-4986-90b6-1b101b32970b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" 
(UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.210487 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc607c73-caa4-4986-90b6-1b101b32970b-config\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.210608 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc607c73-caa4-4986-90b6-1b101b32970b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.210657 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vm5l\" (UniqueName: \"kubernetes.io/projected/dc607c73-caa4-4986-90b6-1b101b32970b-kube-api-access-7vm5l\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.210730 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc607c73-caa4-4986-90b6-1b101b32970b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.210853 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc 
kubenswrapper[4735]: I0128 09:59:33.312994 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc607c73-caa4-4986-90b6-1b101b32970b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.313043 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vm5l\" (UniqueName: \"kubernetes.io/projected/dc607c73-caa4-4986-90b6-1b101b32970b-kube-api-access-7vm5l\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.313081 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc607c73-caa4-4986-90b6-1b101b32970b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.313118 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.313168 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc607c73-caa4-4986-90b6-1b101b32970b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.313187 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dc607c73-caa4-4986-90b6-1b101b32970b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.313206 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc607c73-caa4-4986-90b6-1b101b32970b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.313223 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc607c73-caa4-4986-90b6-1b101b32970b-config\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.313566 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.314523 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc607c73-caa4-4986-90b6-1b101b32970b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.314553 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc607c73-caa4-4986-90b6-1b101b32970b-config\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 
09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.314865 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc607c73-caa4-4986-90b6-1b101b32970b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.319973 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc607c73-caa4-4986-90b6-1b101b32970b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.320879 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc607c73-caa4-4986-90b6-1b101b32970b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.327070 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc607c73-caa4-4986-90b6-1b101b32970b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.335488 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vm5l\" (UniqueName: \"kubernetes.io/projected/dc607c73-caa4-4986-90b6-1b101b32970b-kube-api-access-7vm5l\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.336192 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dc607c73-caa4-4986-90b6-1b101b32970b\") " pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.478375 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.498259 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 09:59:33 crc kubenswrapper[4735]: W0128 09:59:33.513342 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc478e32a_3e57_49eb_8bd0_cff353ad0f56.slice/crio-c0758fd4a71cb95568343bb8c64065a8f8b8011504233f480a51ba400eec0404 WatchSource:0}: Error finding container c0758fd4a71cb95568343bb8c64065a8f8b8011504233f480a51ba400eec0404: Status 404 returned error can't find the container with id c0758fd4a71cb95568343bb8c64065a8f8b8011504233f480a51ba400eec0404 Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.589441 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.604668 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 09:59:33 crc kubenswrapper[4735]: W0128 09:59:33.681923 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4af8f981_a165_4485_a3da_f6c0dfaecb00.slice/crio-3789d7fa64691a8a00c301f0ed8dd1d118526c8ad980bd58b845c9c8222c8746 WatchSource:0}: Error finding container 3789d7fa64691a8a00c301f0ed8dd1d118526c8ad980bd58b845c9c8222c8746: Status 404 returned error can't find the container with id 3789d7fa64691a8a00c301f0ed8dd1d118526c8ad980bd58b845c9c8222c8746 Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.687079 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.708663 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 09:59:33 crc kubenswrapper[4735]: W0128 09:59:33.713151 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7a32b8d_eace_4a5f_b187_baf0e7d67479.slice/crio-db7ab7b3523dabdf99f3e0b864230679e2af1183aa6b9c86c6d6a4709cf0f153 WatchSource:0}: Error finding container db7ab7b3523dabdf99f3e0b864230679e2af1183aa6b9c86c6d6a4709cf0f153: Status 404 returned error can't find the container with id db7ab7b3523dabdf99f3e0b864230679e2af1183aa6b9c86c6d6a4709cf0f153 Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.715206 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdcgm"] Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.720029 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-dns-svc\") pod \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.720163 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-config\") pod \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.720226 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss8fd\" (UniqueName: \"kubernetes.io/projected/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-kube-api-access-ss8fd\") pod \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\" (UID: \"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4\") " Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.721408 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lh9fz"] Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.722044 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4" (UID: "125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.722063 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-config" (OuterVolumeSpecName: "config") pod "125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4" (UID: "125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.725836 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-kube-api-access-ss8fd" (OuterVolumeSpecName: "kube-api-access-ss8fd") pod "125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4" (UID: "125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4"). InnerVolumeSpecName "kube-api-access-ss8fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.822143 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.822176 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss8fd\" (UniqueName: \"kubernetes.io/projected/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-kube-api-access-ss8fd\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:33 crc kubenswrapper[4735]: I0128 09:59:33.822185 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.077948 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdcgm" event={"ID":"21c583b0-22fd-4ccf-907a-1bb119119830","Type":"ContainerStarted","Data":"cc8756b19d4486c436238e0c8af85f16b32a3e9155e4cf5db70351fb6d2d6d2f"} Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.080186 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lh9fz" event={"ID":"f7a32b8d-eace-4a5f-b187-baf0e7d67479","Type":"ContainerStarted","Data":"db7ab7b3523dabdf99f3e0b864230679e2af1183aa6b9c86c6d6a4709cf0f153"} Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.081423 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"348630cf-4549-4b7a-8784-9b1640f64c39","Type":"ContainerStarted","Data":"3839075998c0b24ff149998d883d9d9d471742b2ebaa79cb0e0f90e76ed3391c"} Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.082317 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4af8f981-a165-4485-a3da-f6c0dfaecb00","Type":"ContainerStarted","Data":"3789d7fa64691a8a00c301f0ed8dd1d118526c8ad980bd58b845c9c8222c8746"} Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.083481 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" event={"ID":"125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4","Type":"ContainerDied","Data":"314486b7957175bc441012dfcafd4d4421d9808e875bb1b0ddca60d92259af25"} Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.083551 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-htdnd" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.085591 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c478e32a-3e57-49eb-8bd0-cff353ad0f56","Type":"ContainerStarted","Data":"c0758fd4a71cb95568343bb8c64065a8f8b8011504233f480a51ba400eec0404"} Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.086535 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4362d077-82db-43b3-8df1-b60d2d34028c","Type":"ContainerStarted","Data":"b90ee8489976f847544ee66f8cae9d7460d9fd75d91efdfee21c7d818ce05d7b"} Jan 28 09:59:34 crc kubenswrapper[4735]: W0128 09:59:34.088936 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc607c73_caa4_4986_90b6_1b101b32970b.slice/crio-69f60bd5256694509ee111869d8415abbce431d2c941d1da79ae65569c96c1bb WatchSource:0}: Error finding container 
69f60bd5256694509ee111869d8415abbce431d2c941d1da79ae65569c96c1bb: Status 404 returned error can't find the container with id 69f60bd5256694509ee111869d8415abbce431d2c941d1da79ae65569c96c1bb Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.090822 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.152485 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-htdnd"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.157000 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-htdnd"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.206214 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-g77s5"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.207292 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.210180 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.227066 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g77s5"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.330848 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d10ea80e-de8b-48b7-a31c-5e8941549930-combined-ca-bundle\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.330898 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tgk5\" (UniqueName: 
\"kubernetes.io/projected/d10ea80e-de8b-48b7-a31c-5e8941549930-kube-api-access-9tgk5\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.330929 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d10ea80e-de8b-48b7-a31c-5e8941549930-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.331068 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d10ea80e-de8b-48b7-a31c-5e8941549930-ovn-rundir\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.331142 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d10ea80e-de8b-48b7-a31c-5e8941549930-ovs-rundir\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.331295 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10ea80e-de8b-48b7-a31c-5e8941549930-config\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.360693 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-455hk"] 
Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.384768 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-trzfc"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.387906 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.389720 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.398090 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-trzfc"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433087 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d10ea80e-de8b-48b7-a31c-5e8941549930-ovs-rundir\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433159 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10ea80e-de8b-48b7-a31c-5e8941549930-config\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433195 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrdg\" (UniqueName: \"kubernetes.io/projected/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-kube-api-access-kzrdg\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433253 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-config\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433315 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d10ea80e-de8b-48b7-a31c-5e8941549930-combined-ca-bundle\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433340 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tgk5\" (UniqueName: \"kubernetes.io/projected/d10ea80e-de8b-48b7-a31c-5e8941549930-kube-api-access-9tgk5\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433364 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d10ea80e-de8b-48b7-a31c-5e8941549930-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433421 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433450 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d10ea80e-de8b-48b7-a31c-5e8941549930-ovn-rundir\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433498 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.433855 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d10ea80e-de8b-48b7-a31c-5e8941549930-ovs-rundir\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.434215 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d10ea80e-de8b-48b7-a31c-5e8941549930-ovn-rundir\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.434498 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10ea80e-de8b-48b7-a31c-5e8941549930-config\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.440952 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d10ea80e-de8b-48b7-a31c-5e8941549930-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.442565 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d10ea80e-de8b-48b7-a31c-5e8941549930-combined-ca-bundle\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.450108 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tgk5\" (UniqueName: \"kubernetes.io/projected/d10ea80e-de8b-48b7-a31c-5e8941549930-kube-api-access-9tgk5\") pod \"ovn-controller-metrics-g77s5\" (UID: \"d10ea80e-de8b-48b7-a31c-5e8941549930\") " pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.534719 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.534797 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.534841 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzrdg\" (UniqueName: 
\"kubernetes.io/projected/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-kube-api-access-kzrdg\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.534881 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-config\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.535702 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-config\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.536317 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.537392 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.537560 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-g77s5" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.556029 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzrdg\" (UniqueName: \"kubernetes.io/projected/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-kube-api-access-kzrdg\") pod \"dnsmasq-dns-7fd796d7df-trzfc\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.562153 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6k6cm"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.608413 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6jwt4"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.618662 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.623377 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.628179 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6jwt4"] Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.708061 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.737885 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.737970 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.738040 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsqq7\" (UniqueName: \"kubernetes.io/projected/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-kube-api-access-nsqq7\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.738091 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-config\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.738248 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" 
(UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.846954 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.847043 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsqq7\" (UniqueName: \"kubernetes.io/projected/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-kube-api-access-nsqq7\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.847084 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-config\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.847279 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.847363 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.848546 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.849193 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.849238 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.849650 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-config\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.867462 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsqq7\" (UniqueName: \"kubernetes.io/projected/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-kube-api-access-nsqq7\") pod \"dnsmasq-dns-86db49b7ff-6jwt4\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:34 crc kubenswrapper[4735]: I0128 09:59:34.958453 
4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:35 crc kubenswrapper[4735]: I0128 09:59:35.085824 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g77s5"] Jan 28 09:59:35 crc kubenswrapper[4735]: I0128 09:59:35.096628 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dc607c73-caa4-4986-90b6-1b101b32970b","Type":"ContainerStarted","Data":"69f60bd5256694509ee111869d8415abbce431d2c941d1da79ae65569c96c1bb"} Jan 28 09:59:35 crc kubenswrapper[4735]: I0128 09:59:35.153566 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-trzfc"] Jan 28 09:59:35 crc kubenswrapper[4735]: W0128 09:59:35.168185 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf34ef3a_7fb5_4f63_8ff4_f57c5410a511.slice/crio-e0032e05b66edf528a6c523e0425d5c480242b727e8f5451627af64b027115dc WatchSource:0}: Error finding container e0032e05b66edf528a6c523e0425d5c480242b727e8f5451627af64b027115dc: Status 404 returned error can't find the container with id e0032e05b66edf528a6c523e0425d5c480242b727e8f5451627af64b027115dc Jan 28 09:59:35 crc kubenswrapper[4735]: I0128 09:59:35.525187 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4" path="/var/lib/kubelet/pods/125d5183-f2c8-4c26-bc4d-cc6c0b22e8a4/volumes" Jan 28 09:59:35 crc kubenswrapper[4735]: W0128 09:59:35.545671 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde9e100a_9fe1_49ac_82b2_cbf314f57ca2.slice/crio-9955e0a8b1a7729d910d24cb1283d949e826afe6b233f3a69d6018c462de8ab4 WatchSource:0}: Error finding container 9955e0a8b1a7729d910d24cb1283d949e826afe6b233f3a69d6018c462de8ab4: Status 404 returned error can't find 
the container with id 9955e0a8b1a7729d910d24cb1283d949e826afe6b233f3a69d6018c462de8ab4 Jan 28 09:59:35 crc kubenswrapper[4735]: I0128 09:59:35.550518 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6jwt4"] Jan 28 09:59:36 crc kubenswrapper[4735]: I0128 09:59:36.103499 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g77s5" event={"ID":"d10ea80e-de8b-48b7-a31c-5e8941549930","Type":"ContainerStarted","Data":"955cd61fcfde87afa26971b4bd02a5377a6c9ff2766b5e87d53948837078129f"} Jan 28 09:59:36 crc kubenswrapper[4735]: I0128 09:59:36.104259 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" event={"ID":"af34ef3a-7fb5-4f63-8ff4-f57c5410a511","Type":"ContainerStarted","Data":"e0032e05b66edf528a6c523e0425d5c480242b727e8f5451627af64b027115dc"} Jan 28 09:59:36 crc kubenswrapper[4735]: I0128 09:59:36.104836 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" event={"ID":"de9e100a-9fe1-49ac-82b2-cbf314f57ca2","Type":"ContainerStarted","Data":"9955e0a8b1a7729d910d24cb1283d949e826afe6b233f3a69d6018c462de8ab4"} Jan 28 09:59:36 crc kubenswrapper[4735]: E0128 09:59:36.169154 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 09:59:36 crc kubenswrapper[4735]: E0128 09:59:36.169342 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tnql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-6st6d_openstack(9139575f-a07b-4c92-971d-9c0cb234c43d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 09:59:36 crc kubenswrapper[4735]: E0128 09:59:36.170555 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" podUID="9139575f-a07b-4c92-971d-9c0cb234c43d" Jan 28 09:59:37 crc kubenswrapper[4735]: I0128 09:59:37.114888 4735 generic.go:334] "Generic (PLEG): container finished" podID="af34ef3a-7fb5-4f63-8ff4-f57c5410a511" containerID="7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af" exitCode=0 Jan 28 09:59:37 crc kubenswrapper[4735]: I0128 09:59:37.115230 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" event={"ID":"af34ef3a-7fb5-4f63-8ff4-f57c5410a511","Type":"ContainerDied","Data":"7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af"} Jan 28 09:59:37 crc kubenswrapper[4735]: I0128 09:59:37.119044 4735 generic.go:334] "Generic (PLEG): container finished" podID="64a4167f-5e4f-4110-8a0f-6aeb34dd4034" containerID="22cfdb56f8ec3e4e521690a9202091987ed8bf8a47ba23e2ea25ccd2e53ef809" exitCode=0 Jan 28 09:59:37 crc kubenswrapper[4735]: I0128 09:59:37.119100 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" event={"ID":"64a4167f-5e4f-4110-8a0f-6aeb34dd4034","Type":"ContainerDied","Data":"22cfdb56f8ec3e4e521690a9202091987ed8bf8a47ba23e2ea25ccd2e53ef809"} Jan 28 09:59:37 crc kubenswrapper[4735]: I0128 09:59:37.121180 4735 generic.go:334] "Generic (PLEG): container finished" podID="de9e100a-9fe1-49ac-82b2-cbf314f57ca2" containerID="11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211" exitCode=0 Jan 28 09:59:37 crc kubenswrapper[4735]: I0128 09:59:37.121271 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" event={"ID":"de9e100a-9fe1-49ac-82b2-cbf314f57ca2","Type":"ContainerDied","Data":"11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211"} Jan 28 09:59:37 crc kubenswrapper[4735]: I0128 09:59:37.124478 4735 generic.go:334] "Generic (PLEG): container finished" podID="370e2ab9-a959-4009-8f5b-8973f062cf54" 
containerID="d3badcdd2427fe5ba0ca4312b978810dc5785966c2898944fd4db62ba17c1b08" exitCode=0 Jan 28 09:59:37 crc kubenswrapper[4735]: I0128 09:59:37.124577 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-455hk" event={"ID":"370e2ab9-a959-4009-8f5b-8973f062cf54","Type":"ContainerDied","Data":"d3badcdd2427fe5ba0ca4312b978810dc5785966c2898944fd4db62ba17c1b08"} Jan 28 09:59:40 crc kubenswrapper[4735]: I0128 09:59:40.830829 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" event={"ID":"64a4167f-5e4f-4110-8a0f-6aeb34dd4034","Type":"ContainerDied","Data":"97d903ca7e91c0a4c493b2547e0bd3f4b87fc00ce77a017f5008aa71c00c6c27"} Jan 28 09:59:40 crc kubenswrapper[4735]: I0128 09:59:40.831134 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97d903ca7e91c0a4c493b2547e0bd3f4b87fc00ce77a017f5008aa71c00c6c27" Jan 28 09:59:40 crc kubenswrapper[4735]: I0128 09:59:40.862015 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.014817 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-config\") pod \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.014919 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-dns-svc\") pod \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.015041 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdp66\" (UniqueName: \"kubernetes.io/projected/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-kube-api-access-kdp66\") pod \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\" (UID: \"64a4167f-5e4f-4110-8a0f-6aeb34dd4034\") " Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.020092 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-kube-api-access-kdp66" (OuterVolumeSpecName: "kube-api-access-kdp66") pod "64a4167f-5e4f-4110-8a0f-6aeb34dd4034" (UID: "64a4167f-5e4f-4110-8a0f-6aeb34dd4034"). InnerVolumeSpecName "kube-api-access-kdp66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.033651 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-config" (OuterVolumeSpecName: "config") pod "64a4167f-5e4f-4110-8a0f-6aeb34dd4034" (UID: "64a4167f-5e4f-4110-8a0f-6aeb34dd4034"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.034736 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64a4167f-5e4f-4110-8a0f-6aeb34dd4034" (UID: "64a4167f-5e4f-4110-8a0f-6aeb34dd4034"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.116863 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.116925 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.116947 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdp66\" (UniqueName: \"kubernetes.io/projected/64a4167f-5e4f-4110-8a0f-6aeb34dd4034-kube-api-access-kdp66\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.850761 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-6k6cm" Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.930935 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6k6cm"] Jan 28 09:59:41 crc kubenswrapper[4735]: I0128 09:59:41.940543 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-6k6cm"] Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.302944 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.308553 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.454380 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9139575f-a07b-4c92-971d-9c0cb234c43d-config\") pod \"9139575f-a07b-4c92-971d-9c0cb234c43d\" (UID: \"9139575f-a07b-4c92-971d-9c0cb234c43d\") " Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.454508 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tnql\" (UniqueName: \"kubernetes.io/projected/9139575f-a07b-4c92-971d-9c0cb234c43d-kube-api-access-7tnql\") pod \"9139575f-a07b-4c92-971d-9c0cb234c43d\" (UID: \"9139575f-a07b-4c92-971d-9c0cb234c43d\") " Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.454539 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-dns-svc\") pod \"370e2ab9-a959-4009-8f5b-8973f062cf54\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.454629 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-config\") pod \"370e2ab9-a959-4009-8f5b-8973f062cf54\" (UID: \"370e2ab9-a959-4009-8f5b-8973f062cf54\") " Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.454668 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2x8v\" (UniqueName: \"kubernetes.io/projected/370e2ab9-a959-4009-8f5b-8973f062cf54-kube-api-access-c2x8v\") pod \"370e2ab9-a959-4009-8f5b-8973f062cf54\" (UID: 
\"370e2ab9-a959-4009-8f5b-8973f062cf54\") " Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.455955 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9139575f-a07b-4c92-971d-9c0cb234c43d-config" (OuterVolumeSpecName: "config") pod "9139575f-a07b-4c92-971d-9c0cb234c43d" (UID: "9139575f-a07b-4c92-971d-9c0cb234c43d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.459932 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9139575f-a07b-4c92-971d-9c0cb234c43d-kube-api-access-7tnql" (OuterVolumeSpecName: "kube-api-access-7tnql") pod "9139575f-a07b-4c92-971d-9c0cb234c43d" (UID: "9139575f-a07b-4c92-971d-9c0cb234c43d"). InnerVolumeSpecName "kube-api-access-7tnql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.462001 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370e2ab9-a959-4009-8f5b-8973f062cf54-kube-api-access-c2x8v" (OuterVolumeSpecName: "kube-api-access-c2x8v") pod "370e2ab9-a959-4009-8f5b-8973f062cf54" (UID: "370e2ab9-a959-4009-8f5b-8973f062cf54"). InnerVolumeSpecName "kube-api-access-c2x8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.473881 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-config" (OuterVolumeSpecName: "config") pod "370e2ab9-a959-4009-8f5b-8973f062cf54" (UID: "370e2ab9-a959-4009-8f5b-8973f062cf54"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.474294 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "370e2ab9-a959-4009-8f5b-8973f062cf54" (UID: "370e2ab9-a959-4009-8f5b-8973f062cf54"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.513845 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a4167f-5e4f-4110-8a0f-6aeb34dd4034" path="/var/lib/kubelet/pods/64a4167f-5e4f-4110-8a0f-6aeb34dd4034/volumes" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.557286 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9139575f-a07b-4c92-971d-9c0cb234c43d-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.557396 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tnql\" (UniqueName: \"kubernetes.io/projected/9139575f-a07b-4c92-971d-9c0cb234c43d-kube-api-access-7tnql\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.557413 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.557424 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370e2ab9-a959-4009-8f5b-8973f062cf54-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.557439 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2x8v\" (UniqueName: 
\"kubernetes.io/projected/370e2ab9-a959-4009-8f5b-8973f062cf54-kube-api-access-c2x8v\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.875523 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-455hk" event={"ID":"370e2ab9-a959-4009-8f5b-8973f062cf54","Type":"ContainerDied","Data":"c3e73cd69e36bccb8cd4049538a8016004497cb2d5281efd71737a15c4336379"} Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.875577 4735 scope.go:117] "RemoveContainer" containerID="d3badcdd2427fe5ba0ca4312b978810dc5785966c2898944fd4db62ba17c1b08" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.875619 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-455hk" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.878759 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" event={"ID":"9139575f-a07b-4c92-971d-9c0cb234c43d","Type":"ContainerDied","Data":"441d3f3fe316d3d6b93575e3b68b9da1eacb694c1e74610dd6c1a6598f1c1c0a"} Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.878821 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6st6d" Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.938047 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-455hk"] Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.945984 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-455hk"] Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.978041 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6st6d"] Jan 28 09:59:43 crc kubenswrapper[4735]: I0128 09:59:43.985488 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6st6d"] Jan 28 09:59:45 crc kubenswrapper[4735]: I0128 09:59:45.524187 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370e2ab9-a959-4009-8f5b-8973f062cf54" path="/var/lib/kubelet/pods/370e2ab9-a959-4009-8f5b-8973f062cf54/volumes" Jan 28 09:59:45 crc kubenswrapper[4735]: I0128 09:59:45.525375 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9139575f-a07b-4c92-971d-9c0cb234c43d" path="/var/lib/kubelet/pods/9139575f-a07b-4c92-971d-9c0cb234c43d/volumes" Jan 28 09:59:46 crc kubenswrapper[4735]: I0128 09:59:46.903725 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" event={"ID":"af34ef3a-7fb5-4f63-8ff4-f57c5410a511","Type":"ContainerStarted","Data":"37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb"} Jan 28 09:59:46 crc kubenswrapper[4735]: I0128 09:59:46.904214 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:46 crc kubenswrapper[4735]: I0128 09:59:46.906904 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" 
event={"ID":"de9e100a-9fe1-49ac-82b2-cbf314f57ca2","Type":"ContainerStarted","Data":"7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e"} Jan 28 09:59:46 crc kubenswrapper[4735]: I0128 09:59:46.908696 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:46 crc kubenswrapper[4735]: I0128 09:59:46.930248 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" podStartSLOduration=11.778349279 podStartE2EDuration="12.930219301s" podCreationTimestamp="2026-01-28 09:59:34 +0000 UTC" firstStartedPulling="2026-01-28 09:59:35.171576856 +0000 UTC m=+1028.321241229" lastFinishedPulling="2026-01-28 09:59:36.323446878 +0000 UTC m=+1029.473111251" observedRunningTime="2026-01-28 09:59:46.923880111 +0000 UTC m=+1040.073544504" watchObservedRunningTime="2026-01-28 09:59:46.930219301 +0000 UTC m=+1040.079883694" Jan 28 09:59:46 crc kubenswrapper[4735]: I0128 09:59:46.947696 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" podStartSLOduration=12.180075732 podStartE2EDuration="12.947674069s" podCreationTimestamp="2026-01-28 09:59:34 +0000 UTC" firstStartedPulling="2026-01-28 09:59:35.552647221 +0000 UTC m=+1028.702311604" lastFinishedPulling="2026-01-28 09:59:36.320245568 +0000 UTC m=+1029.469909941" observedRunningTime="2026-01-28 09:59:46.942205632 +0000 UTC m=+1040.091870005" watchObservedRunningTime="2026-01-28 09:59:46.947674069 +0000 UTC m=+1040.097338442" Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.916070 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dc607c73-caa4-4986-90b6-1b101b32970b","Type":"ContainerStarted","Data":"ba861864e4ef89f353369fa6027ed6793b5548f3784844632b2e6f79f1d25e7a"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.916614 4735 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dc607c73-caa4-4986-90b6-1b101b32970b","Type":"ContainerStarted","Data":"c529563890b1c452c3372d2f558a16e2daafc67a40cf289b8c845c74eb99642a"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.917689 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lh9fz" event={"ID":"f7a32b8d-eace-4a5f-b187-baf0e7d67479","Type":"ContainerStarted","Data":"b519584341325d2a2447918e58b98b0c846802fdec7c4b5da79c97a7945cc2b1"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.919021 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g77s5" event={"ID":"d10ea80e-de8b-48b7-a31c-5e8941549930","Type":"ContainerStarted","Data":"c57c8d997de6fadf836b0e1bc536f2200974793a6cd013ba1eb89ffa68836810"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.921787 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"beb7a8ad-5283-42ff-85af-30eb94a8b291","Type":"ContainerStarted","Data":"52b134f6e07e3456f93653c352bd324a3142a82ee0c22eb335cfa34709501680"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.923051 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c58a2fe7-0339-42bb-96f9-084e2b9a3182","Type":"ContainerStarted","Data":"c0bf3952089f45e86451887e605f0ef7c63e66e95befdecafd0d9d27e8ac34e0"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.924265 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdcgm" event={"ID":"21c583b0-22fd-4ccf-907a-1bb119119830","Type":"ContainerStarted","Data":"80b43fa42a51a4a4c00cd3c2c16da121b9fe17044a11b2548548430e1c280a4d"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.924429 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-rdcgm" Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.925786 4735 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4af8f981-a165-4485-a3da-f6c0dfaecb00","Type":"ContainerStarted","Data":"a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.925830 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.927375 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c478e32a-3e57-49eb-8bd0-cff353ad0f56","Type":"ContainerStarted","Data":"d9b6cc0bb3dd5497e16ecf558edd2e29a05444a7591383b7144785d0e9e47468"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.927486 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.929099 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4362d077-82db-43b3-8df1-b60d2d34028c","Type":"ContainerStarted","Data":"3f35686d4aadf66364d376b62fbcfce33de30ecd376b1ccc01652f2cf1857ccf"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.929135 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4362d077-82db-43b3-8df1-b60d2d34028c","Type":"ContainerStarted","Data":"13be58b7e7edb71aac8360efeab5452bf417302f187e57196d1647adcc201ae0"} Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.951721 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.395752678 podStartE2EDuration="15.951693807s" podCreationTimestamp="2026-01-28 09:59:32 +0000 UTC" firstStartedPulling="2026-01-28 09:59:34.095565489 +0000 UTC m=+1027.245229862" lastFinishedPulling="2026-01-28 09:59:46.651506618 +0000 UTC m=+1039.801170991" observedRunningTime="2026-01-28 09:59:47.935108391 +0000 UTC m=+1041.084772764" 
watchObservedRunningTime="2026-01-28 09:59:47.951693807 +0000 UTC m=+1041.101358200" Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.972631 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.074013293 podStartE2EDuration="19.972608642s" podCreationTimestamp="2026-01-28 09:59:28 +0000 UTC" firstStartedPulling="2026-01-28 09:59:33.59892822 +0000 UTC m=+1026.748592593" lastFinishedPulling="2026-01-28 09:59:46.497523559 +0000 UTC m=+1039.647187942" observedRunningTime="2026-01-28 09:59:47.968798897 +0000 UTC m=+1041.118463260" watchObservedRunningTime="2026-01-28 09:59:47.972608642 +0000 UTC m=+1041.122273035" Jan 28 09:59:47 crc kubenswrapper[4735]: I0128 09:59:47.974247 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 09:59:48.012803 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-g77s5" podStartSLOduration=2.339887872 podStartE2EDuration="14.012734211s" podCreationTimestamp="2026-01-28 09:59:34 +0000 UTC" firstStartedPulling="2026-01-28 09:59:35.115385444 +0000 UTC m=+1028.265049817" lastFinishedPulling="2026-01-28 09:59:46.788231783 +0000 UTC m=+1039.937896156" observedRunningTime="2026-01-28 09:59:47.988510402 +0000 UTC m=+1041.138174775" watchObservedRunningTime="2026-01-28 09:59:48.012734211 +0000 UTC m=+1041.162398604" Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 09:59:48.065781 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.442873591 podStartE2EDuration="25.065760683s" podCreationTimestamp="2026-01-28 09:59:23 +0000 UTC" firstStartedPulling="2026-01-28 09:59:33.541404014 +0000 UTC m=+1026.691068387" lastFinishedPulling="2026-01-28 09:59:46.164291066 +0000 UTC m=+1039.313955479" observedRunningTime="2026-01-28 09:59:48.041173105 
+0000 UTC m=+1041.190837518" watchObservedRunningTime="2026-01-28 09:59:48.065760683 +0000 UTC m=+1041.215425076" Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 09:59:48.103039 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.421810687 podStartE2EDuration="23.10302131s" podCreationTimestamp="2026-01-28 09:59:25 +0000 UTC" firstStartedPulling="2026-01-28 09:59:33.683509615 +0000 UTC m=+1026.833174008" lastFinishedPulling="2026-01-28 09:59:47.364720258 +0000 UTC m=+1040.514384631" observedRunningTime="2026-01-28 09:59:48.09627244 +0000 UTC m=+1041.245936833" watchObservedRunningTime="2026-01-28 09:59:48.10302131 +0000 UTC m=+1041.252685683" Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 09:59:48.116234 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rdcgm" podStartSLOduration=7.64933521 podStartE2EDuration="20.116214711s" podCreationTimestamp="2026-01-28 09:59:28 +0000 UTC" firstStartedPulling="2026-01-28 09:59:33.69685895 +0000 UTC m=+1026.846523323" lastFinishedPulling="2026-01-28 09:59:46.163738411 +0000 UTC m=+1039.313402824" observedRunningTime="2026-01-28 09:59:48.114229462 +0000 UTC m=+1041.263893845" watchObservedRunningTime="2026-01-28 09:59:48.116214711 +0000 UTC m=+1041.265879094" Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 09:59:48.478633 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 09:59:48.478730 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 09:59:48.936806 4735 generic.go:334] "Generic (PLEG): container finished" podID="f7a32b8d-eace-4a5f-b187-baf0e7d67479" containerID="b519584341325d2a2447918e58b98b0c846802fdec7c4b5da79c97a7945cc2b1" exitCode=0 Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 
09:59:48.939416 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lh9fz" event={"ID":"f7a32b8d-eace-4a5f-b187-baf0e7d67479","Type":"ContainerDied","Data":"b519584341325d2a2447918e58b98b0c846802fdec7c4b5da79c97a7945cc2b1"} Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 09:59:48.939558 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"348630cf-4549-4b7a-8784-9b1640f64c39","Type":"ContainerStarted","Data":"074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898"} Jan 28 09:59:48 crc kubenswrapper[4735]: I0128 09:59:48.940053 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1","Type":"ContainerStarted","Data":"eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90"} Jan 28 09:59:49 crc kubenswrapper[4735]: I0128 09:59:49.951637 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lh9fz" event={"ID":"f7a32b8d-eace-4a5f-b187-baf0e7d67479","Type":"ContainerStarted","Data":"8d1f1a2560ee0196c5fcf8598a048367985e1dd80e591653e3fad05901c7a819"} Jan 28 09:59:49 crc kubenswrapper[4735]: I0128 09:59:49.952193 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lh9fz" event={"ID":"f7a32b8d-eace-4a5f-b187-baf0e7d67479","Type":"ContainerStarted","Data":"5afdb50f951812340314464d3473a305cd35b5333e2c2dba7d9fbf63684624c4"} Jan 28 09:59:49 crc kubenswrapper[4735]: I0128 09:59:49.974882 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:49 crc kubenswrapper[4735]: I0128 09:59:49.977031 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lh9fz" podStartSLOduration=9.645105458 podStartE2EDuration="21.977018007s" podCreationTimestamp="2026-01-28 09:59:28 +0000 UTC" firstStartedPulling="2026-01-28 
09:59:33.715481129 +0000 UTC m=+1026.865145512" lastFinishedPulling="2026-01-28 09:59:46.047393698 +0000 UTC m=+1039.197058061" observedRunningTime="2026-01-28 09:59:49.975026307 +0000 UTC m=+1043.124690690" watchObservedRunningTime="2026-01-28 09:59:49.977018007 +0000 UTC m=+1043.126682390" Jan 28 09:59:50 crc kubenswrapper[4735]: I0128 09:59:50.965515 4735 generic.go:334] "Generic (PLEG): container finished" podID="c58a2fe7-0339-42bb-96f9-084e2b9a3182" containerID="c0bf3952089f45e86451887e605f0ef7c63e66e95befdecafd0d9d27e8ac34e0" exitCode=0 Jan 28 09:59:50 crc kubenswrapper[4735]: I0128 09:59:50.965652 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c58a2fe7-0339-42bb-96f9-084e2b9a3182","Type":"ContainerDied","Data":"c0bf3952089f45e86451887e605f0ef7c63e66e95befdecafd0d9d27e8ac34e0"} Jan 28 09:59:50 crc kubenswrapper[4735]: I0128 09:59:50.966798 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:50 crc kubenswrapper[4735]: I0128 09:59:50.966905 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lh9fz" Jan 28 09:59:51 crc kubenswrapper[4735]: I0128 09:59:51.036724 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:51 crc kubenswrapper[4735]: E0128 09:59:51.209380 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeb7a8ad_5283_42ff_85af_30eb94a8b291.slice/crio-52b134f6e07e3456f93653c352bd324a3142a82ee0c22eb335cfa34709501680.scope\": RecentStats: unable to find data in memory cache]" Jan 28 09:59:51 crc kubenswrapper[4735]: I0128 09:59:51.542268 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:51 crc kubenswrapper[4735]: I0128 
09:59:51.975700 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c58a2fe7-0339-42bb-96f9-084e2b9a3182","Type":"ContainerStarted","Data":"b95603b2168ce0ff40e22218fb2811d0c495f14a0f24f8f08da90452f04cbe26"} Jan 28 09:59:51 crc kubenswrapper[4735]: I0128 09:59:51.979231 4735 generic.go:334] "Generic (PLEG): container finished" podID="beb7a8ad-5283-42ff-85af-30eb94a8b291" containerID="52b134f6e07e3456f93653c352bd324a3142a82ee0c22eb335cfa34709501680" exitCode=0 Jan 28 09:59:51 crc kubenswrapper[4735]: I0128 09:59:51.979321 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"beb7a8ad-5283-42ff-85af-30eb94a8b291","Type":"ContainerDied","Data":"52b134f6e07e3456f93653c352bd324a3142a82ee0c22eb335cfa34709501680"} Jan 28 09:59:52 crc kubenswrapper[4735]: I0128 09:59:52.021585 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.7934422 podStartE2EDuration="32.021557969s" podCreationTimestamp="2026-01-28 09:59:20 +0000 UTC" firstStartedPulling="2026-01-28 09:59:32.936218618 +0000 UTC m=+1026.085882991" lastFinishedPulling="2026-01-28 09:59:46.164334377 +0000 UTC m=+1039.313998760" observedRunningTime="2026-01-28 09:59:52.015521588 +0000 UTC m=+1045.165186001" watchObservedRunningTime="2026-01-28 09:59:52.021557969 +0000 UTC m=+1045.171222382" Jan 28 09:59:52 crc kubenswrapper[4735]: I0128 09:59:52.096512 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 09:59:52 crc kubenswrapper[4735]: I0128 09:59:52.096624 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 09:59:52 crc kubenswrapper[4735]: I0128 09:59:52.989897 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"beb7a8ad-5283-42ff-85af-30eb94a8b291","Type":"ContainerStarted","Data":"bd3eeede80609440b424b91e48d4df62232b9fda8c50e539dd380b2d137773f9"} Jan 28 09:59:53 crc kubenswrapper[4735]: I0128 09:59:53.010497 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.331546414 podStartE2EDuration="31.010484357s" podCreationTimestamp="2026-01-28 09:59:22 +0000 UTC" firstStartedPulling="2026-01-28 09:59:31.817387666 +0000 UTC m=+1024.967052039" lastFinishedPulling="2026-01-28 09:59:46.496325599 +0000 UTC m=+1039.645989982" observedRunningTime="2026-01-28 09:59:53.009099413 +0000 UTC m=+1046.158763826" watchObservedRunningTime="2026-01-28 09:59:53.010484357 +0000 UTC m=+1046.160148730" Jan 28 09:59:53 crc kubenswrapper[4735]: I0128 09:59:53.538244 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 09:59:53 crc kubenswrapper[4735]: I0128 09:59:53.549999 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:53 crc kubenswrapper[4735]: I0128 09:59:53.550166 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:53 crc kubenswrapper[4735]: I0128 09:59:53.835152 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 09:59:54 crc kubenswrapper[4735]: I0128 09:59:54.710483 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:54 crc kubenswrapper[4735]: I0128 09:59:54.960819 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.011547 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-trzfc"] Jan 28 
09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.011766 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" podUID="af34ef3a-7fb5-4f63-8ff4-f57c5410a511" containerName="dnsmasq-dns" containerID="cri-o://37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb" gracePeriod=10 Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.039364 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.251671 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 09:59:55 crc kubenswrapper[4735]: E0128 09:59:55.252002 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a4167f-5e4f-4110-8a0f-6aeb34dd4034" containerName="init" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.252019 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a4167f-5e4f-4110-8a0f-6aeb34dd4034" containerName="init" Jan 28 09:59:55 crc kubenswrapper[4735]: E0128 09:59:55.252059 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370e2ab9-a959-4009-8f5b-8973f062cf54" containerName="init" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.252066 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="370e2ab9-a959-4009-8f5b-8973f062cf54" containerName="init" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.252224 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a4167f-5e4f-4110-8a0f-6aeb34dd4034" containerName="init" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.252238 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="370e2ab9-a959-4009-8f5b-8973f062cf54" containerName="init" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.253012 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.257965 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.259173 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9pz4b" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.259574 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.259877 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.280185 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.339006 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a13939-6aac-486c-a541-5c3683bff770-config\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.339077 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a13939-6aac-486c-a541-5c3683bff770-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.339110 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a13939-6aac-486c-a541-5c3683bff770-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " 
pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.339152 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09a13939-6aac-486c-a541-5c3683bff770-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.339174 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a13939-6aac-486c-a541-5c3683bff770-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.339198 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96466\" (UniqueName: \"kubernetes.io/projected/09a13939-6aac-486c-a541-5c3683bff770-kube-api-access-96466\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.339235 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09a13939-6aac-486c-a541-5c3683bff770-scripts\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.443628 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a13939-6aac-486c-a541-5c3683bff770-config\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.443918 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a13939-6aac-486c-a541-5c3683bff770-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.443965 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a13939-6aac-486c-a541-5c3683bff770-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.444003 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09a13939-6aac-486c-a541-5c3683bff770-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.444034 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a13939-6aac-486c-a541-5c3683bff770-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.444064 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96466\" (UniqueName: \"kubernetes.io/projected/09a13939-6aac-486c-a541-5c3683bff770-kube-api-access-96466\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.444103 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09a13939-6aac-486c-a541-5c3683bff770-scripts\") pod \"ovn-northd-0\" (UID: 
\"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.444555 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09a13939-6aac-486c-a541-5c3683bff770-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.445101 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09a13939-6aac-486c-a541-5c3683bff770-scripts\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.445178 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.445491 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a13939-6aac-486c-a541-5c3683bff770-config\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.455722 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a13939-6aac-486c-a541-5c3683bff770-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.456303 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a13939-6aac-486c-a541-5c3683bff770-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 
09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.459590 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a13939-6aac-486c-a541-5c3683bff770-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.472674 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96466\" (UniqueName: \"kubernetes.io/projected/09a13939-6aac-486c-a541-5c3683bff770-kube-api-access-96466\") pod \"ovn-northd-0\" (UID: \"09a13939-6aac-486c-a541-5c3683bff770\") " pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.544984 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-config\") pod \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.545059 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-dns-svc\") pod \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.545105 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzrdg\" (UniqueName: \"kubernetes.io/projected/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-kube-api-access-kzrdg\") pod \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.545234 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-ovsdbserver-nb\") pod \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\" (UID: \"af34ef3a-7fb5-4f63-8ff4-f57c5410a511\") " Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.556997 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-kube-api-access-kzrdg" (OuterVolumeSpecName: "kube-api-access-kzrdg") pod "af34ef3a-7fb5-4f63-8ff4-f57c5410a511" (UID: "af34ef3a-7fb5-4f63-8ff4-f57c5410a511"). InnerVolumeSpecName "kube-api-access-kzrdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.592171 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.626490 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af34ef3a-7fb5-4f63-8ff4-f57c5410a511" (UID: "af34ef3a-7fb5-4f63-8ff4-f57c5410a511"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.633712 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-config" (OuterVolumeSpecName: "config") pod "af34ef3a-7fb5-4f63-8ff4-f57c5410a511" (UID: "af34ef3a-7fb5-4f63-8ff4-f57c5410a511"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.648293 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af34ef3a-7fb5-4f63-8ff4-f57c5410a511" (UID: "af34ef3a-7fb5-4f63-8ff4-f57c5410a511"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.653787 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.653831 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzrdg\" (UniqueName: \"kubernetes.io/projected/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-kube-api-access-kzrdg\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.653846 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.653859 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af34ef3a-7fb5-4f63-8ff4-f57c5410a511-config\") on node \"crc\" DevicePath \"\"" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.782420 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-hkwcb"] Jan 28 09:59:55 crc kubenswrapper[4735]: E0128 09:59:55.782992 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af34ef3a-7fb5-4f63-8ff4-f57c5410a511" containerName="dnsmasq-dns" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.783003 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="af34ef3a-7fb5-4f63-8ff4-f57c5410a511" containerName="dnsmasq-dns" Jan 28 09:59:55 crc kubenswrapper[4735]: E0128 09:59:55.783015 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af34ef3a-7fb5-4f63-8ff4-f57c5410a511" containerName="init" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.783020 4735 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="af34ef3a-7fb5-4f63-8ff4-f57c5410a511" containerName="init" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.783149 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="af34ef3a-7fb5-4f63-8ff4-f57c5410a511" containerName="dnsmasq-dns" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.786240 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.792533 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.860146 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-dns-svc\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.860188 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rlnx\" (UniqueName: \"kubernetes.io/projected/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-kube-api-access-6rlnx\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.860253 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.860312 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.860330 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-config\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.901025 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hkwcb"] Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.963686 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.964128 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.964160 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-config\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " 
pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.964228 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-dns-svc\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.964260 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rlnx\" (UniqueName: \"kubernetes.io/projected/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-kube-api-access-6rlnx\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.964725 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.964853 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.965500 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-config\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 
09:59:55.965710 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-dns-svc\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:55 crc kubenswrapper[4735]: I0128 09:59:55.985705 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rlnx\" (UniqueName: \"kubernetes.io/projected/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-kube-api-access-6rlnx\") pod \"dnsmasq-dns-698758b865-hkwcb\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.021194 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.021264 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" event={"ID":"af34ef3a-7fb5-4f63-8ff4-f57c5410a511","Type":"ContainerDied","Data":"37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb"} Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.021331 4735 scope.go:117] "RemoveContainer" containerID="37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.021274 4735 generic.go:334] "Generic (PLEG): container finished" podID="af34ef3a-7fb5-4f63-8ff4-f57c5410a511" containerID="37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb" exitCode=0 Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.023343 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-trzfc" event={"ID":"af34ef3a-7fb5-4f63-8ff4-f57c5410a511","Type":"ContainerDied","Data":"e0032e05b66edf528a6c523e0425d5c480242b727e8f5451627af64b027115dc"} Jan 28 09:59:56 crc 
kubenswrapper[4735]: I0128 09:59:56.049831 4735 scope.go:117] "RemoveContainer" containerID="7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.075060 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-trzfc"] Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.086571 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-trzfc"] Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.086900 4735 scope.go:117] "RemoveContainer" containerID="37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb" Jan 28 09:59:56 crc kubenswrapper[4735]: E0128 09:59:56.087278 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb\": container with ID starting with 37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb not found: ID does not exist" containerID="37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.087302 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb"} err="failed to get container status \"37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb\": rpc error: code = NotFound desc = could not find container \"37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb\": container with ID starting with 37cef9559db3f283ab72d6d5992712fa9bb84d2f5279aad246a50eff1d5276bb not found: ID does not exist" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.087343 4735 scope.go:117] "RemoveContainer" containerID="7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af" Jan 28 09:59:56 crc kubenswrapper[4735]: E0128 09:59:56.087603 4735 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af\": container with ID starting with 7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af not found: ID does not exist" containerID="7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.087624 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af"} err="failed to get container status \"7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af\": rpc error: code = NotFound desc = could not find container \"7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af\": container with ID starting with 7fc18085f2458e82a6138656022a95f89db09bce605ca2dfe981339275dcd3af not found: ID does not exist" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.124272 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.198283 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.297365 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.352173 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.606282 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hkwcb"] Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.874508 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.882082 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.889224 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.889949 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.890131 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.897436 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.914004 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9wjhs" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.981986 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/718744c9-5139-48b6-bf2c-7a29489d0fa7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.982245 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/718744c9-5139-48b6-bf2c-7a29489d0fa7-cache\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.982375 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/718744c9-5139-48b6-bf2c-7a29489d0fa7-lock\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.982469 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb8fc\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-kube-api-access-lb8fc\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.982560 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:56 crc kubenswrapper[4735]: I0128 09:59:56.982741 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.030802 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"09a13939-6aac-486c-a541-5c3683bff770","Type":"ContainerStarted","Data":"f27b64c7e15354a29878c9e739fcf9b21bcd5b87b77e9823e305738a528ac294"} Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.032445 4735 generic.go:334] "Generic (PLEG): container finished" podID="902423c1-75ea-4c5a-87d3-d77dbdbc6d85" containerID="e74694f6382344cbe432bb1be2091d9cbd137f0183158a3439adafcf4945f272" exitCode=0 Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.032502 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hkwcb" event={"ID":"902423c1-75ea-4c5a-87d3-d77dbdbc6d85","Type":"ContainerDied","Data":"e74694f6382344cbe432bb1be2091d9cbd137f0183158a3439adafcf4945f272"} Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.032518 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hkwcb" event={"ID":"902423c1-75ea-4c5a-87d3-d77dbdbc6d85","Type":"ContainerStarted","Data":"5f7d30ecb701214977fb7da4f0fe1e7f95a78ce62c558016c0e6e1be939956af"} Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.087358 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.088137 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/718744c9-5139-48b6-bf2c-7a29489d0fa7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: 
\"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.088268 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/718744c9-5139-48b6-bf2c-7a29489d0fa7-cache\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.088299 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/718744c9-5139-48b6-bf2c-7a29489d0fa7-lock\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.088324 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.088344 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb8fc\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-kube-api-access-lb8fc\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.088998 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.090325 4735 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/718744c9-5139-48b6-bf2c-7a29489d0fa7-cache\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: E0128 09:59:57.091524 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 09:59:57 crc kubenswrapper[4735]: E0128 09:59:57.091547 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 09:59:57 crc kubenswrapper[4735]: E0128 09:59:57.091616 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift podName:718744c9-5139-48b6-bf2c-7a29489d0fa7 nodeName:}" failed. No retries permitted until 2026-01-28 09:59:57.591600001 +0000 UTC m=+1050.741264374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift") pod "swift-storage-0" (UID: "718744c9-5139-48b6-bf2c-7a29489d0fa7") : configmap "swift-ring-files" not found Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.091950 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/718744c9-5139-48b6-bf2c-7a29489d0fa7-lock\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.095825 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/718744c9-5139-48b6-bf2c-7a29489d0fa7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.106272 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb8fc\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-kube-api-access-lb8fc\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.119966 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.441388 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-h9q76"] Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.442633 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.444998 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.445102 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.445169 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.454073 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-h9q76"] Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.497924 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-combined-ca-bundle\") pod \"swift-ring-rebalance-h9q76\" (UID: 
\"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.497982 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-swiftconf\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.498045 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-ring-data-devices\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.498129 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2hmp\" (UniqueName: \"kubernetes.io/projected/8f9e278a-14f9-4162-acea-69717d616573-kube-api-access-g2hmp\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.498174 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-dispersionconf\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.498210 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-scripts\") pod 
\"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.498235 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f9e278a-14f9-4162-acea-69717d616573-etc-swift\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.512450 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af34ef3a-7fb5-4f63-8ff4-f57c5410a511" path="/var/lib/kubelet/pods/af34ef3a-7fb5-4f63-8ff4-f57c5410a511/volumes" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.599131 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-dispersionconf\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.599181 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-scripts\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.599198 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f9e278a-14f9-4162-acea-69717d616573-etc-swift\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.599297 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-combined-ca-bundle\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.599316 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-swiftconf\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.599352 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-ring-data-devices\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.599421 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.599452 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2hmp\" (UniqueName: \"kubernetes.io/projected/8f9e278a-14f9-4162-acea-69717d616573-kube-api-access-g2hmp\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: E0128 09:59:57.599951 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 
09:59:57 crc kubenswrapper[4735]: E0128 09:59:57.599980 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 09:59:57 crc kubenswrapper[4735]: E0128 09:59:57.600038 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift podName:718744c9-5139-48b6-bf2c-7a29489d0fa7 nodeName:}" failed. No retries permitted until 2026-01-28 09:59:58.600017246 +0000 UTC m=+1051.749681619 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift") pod "swift-storage-0" (UID: "718744c9-5139-48b6-bf2c-7a29489d0fa7") : configmap "swift-ring-files" not found Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.600196 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-scripts\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.600495 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f9e278a-14f9-4162-acea-69717d616573-etc-swift\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.601171 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-ring-data-devices\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 
09:59:57.611544 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-dispersionconf\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.611639 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-swiftconf\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.611915 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-combined-ca-bundle\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.615452 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2hmp\" (UniqueName: \"kubernetes.io/projected/8f9e278a-14f9-4162-acea-69717d616573-kube-api-access-g2hmp\") pod \"swift-ring-rebalance-h9q76\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:57 crc kubenswrapper[4735]: I0128 09:59:57.771353 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-h9q76" Jan 28 09:59:58 crc kubenswrapper[4735]: I0128 09:59:58.042180 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"09a13939-6aac-486c-a541-5c3683bff770","Type":"ContainerStarted","Data":"e305d7b4716be747426f19585cbacb4cdff37e5200d874839de3af457c4815a0"} Jan 28 09:59:58 crc kubenswrapper[4735]: I0128 09:59:58.042527 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 09:59:58 crc kubenswrapper[4735]: I0128 09:59:58.042539 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"09a13939-6aac-486c-a541-5c3683bff770","Type":"ContainerStarted","Data":"d041c5fe917882b73741730be2b150ac81bc0ff7ba1397c6d86bbab56dcdcfba"} Jan 28 09:59:58 crc kubenswrapper[4735]: I0128 09:59:58.044299 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hkwcb" event={"ID":"902423c1-75ea-4c5a-87d3-d77dbdbc6d85","Type":"ContainerStarted","Data":"9330000dbe84fdc05c70e8cf182bd95e084855522c5e98afe2982f7c1b6cfed5"} Jan 28 09:59:58 crc kubenswrapper[4735]: I0128 09:59:58.045141 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 09:59:58 crc kubenswrapper[4735]: I0128 09:59:58.070146 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.728285364 podStartE2EDuration="3.070128499s" podCreationTimestamp="2026-01-28 09:59:55 +0000 UTC" firstStartedPulling="2026-01-28 09:59:56.36160058 +0000 UTC m=+1049.511264953" lastFinishedPulling="2026-01-28 09:59:57.703443715 +0000 UTC m=+1050.853108088" observedRunningTime="2026-01-28 09:59:58.058571088 +0000 UTC m=+1051.208235521" watchObservedRunningTime="2026-01-28 09:59:58.070128499 +0000 UTC m=+1051.219792872" Jan 28 09:59:58 crc kubenswrapper[4735]: I0128 09:59:58.083221 4735 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-hkwcb" podStartSLOduration=3.083203228 podStartE2EDuration="3.083203228s" podCreationTimestamp="2026-01-28 09:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 09:59:58.077120744 +0000 UTC m=+1051.226785117" watchObservedRunningTime="2026-01-28 09:59:58.083203228 +0000 UTC m=+1051.232867601" Jan 28 09:59:58 crc kubenswrapper[4735]: I0128 09:59:58.204022 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-h9q76"] Jan 28 09:59:58 crc kubenswrapper[4735]: I0128 09:59:58.618470 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 09:59:58 crc kubenswrapper[4735]: E0128 09:59:58.618658 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 09:59:58 crc kubenswrapper[4735]: E0128 09:59:58.619187 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 09:59:58 crc kubenswrapper[4735]: E0128 09:59:58.619239 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift podName:718744c9-5139-48b6-bf2c-7a29489d0fa7 nodeName:}" failed. No retries permitted until 2026-01-28 10:00:00.619219725 +0000 UTC m=+1053.768884158 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift") pod "swift-storage-0" (UID: "718744c9-5139-48b6-bf2c-7a29489d0fa7") : configmap "swift-ring-files" not found Jan 28 09:59:59 crc kubenswrapper[4735]: I0128 09:59:59.055672 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-h9q76" event={"ID":"8f9e278a-14f9-4162-acea-69717d616573","Type":"ContainerStarted","Data":"c56f1c059f780acf885ba6d991295c75b7cf3f44b8227093a98b6f0cf2348901"} Jan 28 09:59:59 crc kubenswrapper[4735]: I0128 09:59:59.746102 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 09:59:59 crc kubenswrapper[4735]: I0128 09:59:59.834463 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.148459 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh"] Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.150162 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.154119 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.154319 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.158936 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh"] Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.247014 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b969dfd-602d-47fb-a56e-f059eb358442-secret-volume\") pod \"collect-profiles-29493240-8spzh\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.247055 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b969dfd-602d-47fb-a56e-f059eb358442-config-volume\") pod \"collect-profiles-29493240-8spzh\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.247079 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwn2w\" (UniqueName: \"kubernetes.io/projected/9b969dfd-602d-47fb-a56e-f059eb358442-kube-api-access-jwn2w\") pod \"collect-profiles-29493240-8spzh\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.349085 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b969dfd-602d-47fb-a56e-f059eb358442-secret-volume\") pod \"collect-profiles-29493240-8spzh\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.349120 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b969dfd-602d-47fb-a56e-f059eb358442-config-volume\") pod \"collect-profiles-29493240-8spzh\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.349158 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwn2w\" (UniqueName: \"kubernetes.io/projected/9b969dfd-602d-47fb-a56e-f059eb358442-kube-api-access-jwn2w\") pod \"collect-profiles-29493240-8spzh\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.350435 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b969dfd-602d-47fb-a56e-f059eb358442-config-volume\") pod \"collect-profiles-29493240-8spzh\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.355378 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9b969dfd-602d-47fb-a56e-f059eb358442-secret-volume\") pod \"collect-profiles-29493240-8spzh\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.366250 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwn2w\" (UniqueName: \"kubernetes.io/projected/9b969dfd-602d-47fb-a56e-f059eb358442-kube-api-access-jwn2w\") pod \"collect-profiles-29493240-8spzh\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.478160 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.653642 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 10:00:00 crc kubenswrapper[4735]: E0128 10:00:00.653863 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 10:00:00 crc kubenswrapper[4735]: E0128 10:00:00.653881 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 10:00:00 crc kubenswrapper[4735]: E0128 10:00:00.653931 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift podName:718744c9-5139-48b6-bf2c-7a29489d0fa7 nodeName:}" failed. No retries permitted until 2026-01-28 10:00:04.653914061 +0000 UTC m=+1057.803578444 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift") pod "swift-storage-0" (UID: "718744c9-5139-48b6-bf2c-7a29489d0fa7") : configmap "swift-ring-files" not found Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.816364 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jjxq7"] Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.818678 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.821569 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.825801 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jjxq7"] Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.857440 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-operator-scripts\") pod \"root-account-create-update-jjxq7\" (UID: \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\") " pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.857505 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmstq\" (UniqueName: \"kubernetes.io/projected/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-kube-api-access-wmstq\") pod \"root-account-create-update-jjxq7\" (UID: \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\") " pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.959121 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-operator-scripts\") pod \"root-account-create-update-jjxq7\" (UID: \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\") " pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.959202 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmstq\" (UniqueName: \"kubernetes.io/projected/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-kube-api-access-wmstq\") pod \"root-account-create-update-jjxq7\" (UID: \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\") " pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.959996 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-operator-scripts\") pod \"root-account-create-update-jjxq7\" (UID: \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\") " pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:00 crc kubenswrapper[4735]: I0128 10:00:00.977616 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmstq\" (UniqueName: \"kubernetes.io/projected/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-kube-api-access-wmstq\") pod \"root-account-create-update-jjxq7\" (UID: \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\") " pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:01 crc kubenswrapper[4735]: I0128 10:00:01.137107 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:01 crc kubenswrapper[4735]: W0128 10:00:01.872071 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b969dfd_602d_47fb_a56e_f059eb358442.slice/crio-86eb0777bae6d3e68f4434b9dbfe41a84fae8c37b3f4cdb29e93bb3a1aa68d5c WatchSource:0}: Error finding container 86eb0777bae6d3e68f4434b9dbfe41a84fae8c37b3f4cdb29e93bb3a1aa68d5c: Status 404 returned error can't find the container with id 86eb0777bae6d3e68f4434b9dbfe41a84fae8c37b3f4cdb29e93bb3a1aa68d5c Jan 28 10:00:01 crc kubenswrapper[4735]: I0128 10:00:01.881067 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh"] Jan 28 10:00:01 crc kubenswrapper[4735]: I0128 10:00:01.948795 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jjxq7"] Jan 28 10:00:02 crc kubenswrapper[4735]: I0128 10:00:02.108639 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jjxq7" event={"ID":"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f","Type":"ContainerStarted","Data":"8f82604ff215761dc2365fd4a43142e85dff4268069fd592cb6abf53c15ed0e6"} Jan 28 10:00:02 crc kubenswrapper[4735]: I0128 10:00:02.120178 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-h9q76" event={"ID":"8f9e278a-14f9-4162-acea-69717d616573","Type":"ContainerStarted","Data":"8b24bb5e4a3757e51f6635e0461cfba7b28affe4f65a65ee279b84b0642bcc06"} Jan 28 10:00:02 crc kubenswrapper[4735]: I0128 10:00:02.123605 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" event={"ID":"9b969dfd-602d-47fb-a56e-f059eb358442","Type":"ContainerStarted","Data":"d52b8c76e71754a0eb32594a68f868bbe1bfee5273e25a2e13649945f3056fbb"} Jan 28 10:00:02 crc kubenswrapper[4735]: I0128 
10:00:02.123673 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" event={"ID":"9b969dfd-602d-47fb-a56e-f059eb358442","Type":"ContainerStarted","Data":"86eb0777bae6d3e68f4434b9dbfe41a84fae8c37b3f4cdb29e93bb3a1aa68d5c"} Jan 28 10:00:02 crc kubenswrapper[4735]: I0128 10:00:02.144900 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-h9q76" podStartSLOduration=1.90433138 podStartE2EDuration="5.144877853s" podCreationTimestamp="2026-01-28 09:59:57 +0000 UTC" firstStartedPulling="2026-01-28 09:59:58.20506907 +0000 UTC m=+1051.354733453" lastFinishedPulling="2026-01-28 10:00:01.445615553 +0000 UTC m=+1054.595279926" observedRunningTime="2026-01-28 10:00:02.13639051 +0000 UTC m=+1055.286054883" watchObservedRunningTime="2026-01-28 10:00:02.144877853 +0000 UTC m=+1055.294542236" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.138204 4735 generic.go:334] "Generic (PLEG): container finished" podID="e0c3396c-aa5e-4763-a07f-8a05d5e72c9f" containerID="b8bfd489d50d55933182c7098197cd6d387579ab42190cfec71fb6a4b17a8b2d" exitCode=0 Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.138298 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jjxq7" event={"ID":"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f","Type":"ContainerDied","Data":"b8bfd489d50d55933182c7098197cd6d387579ab42190cfec71fb6a4b17a8b2d"} Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.141175 4735 generic.go:334] "Generic (PLEG): container finished" podID="9b969dfd-602d-47fb-a56e-f059eb358442" containerID="d52b8c76e71754a0eb32594a68f868bbe1bfee5273e25a2e13649945f3056fbb" exitCode=0 Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.141229 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" 
event={"ID":"9b969dfd-602d-47fb-a56e-f059eb358442","Type":"ContainerDied","Data":"d52b8c76e71754a0eb32594a68f868bbe1bfee5273e25a2e13649945f3056fbb"} Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.165159 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" podStartSLOduration=3.165085448 podStartE2EDuration="3.165085448s" podCreationTimestamp="2026-01-28 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:02.154079125 +0000 UTC m=+1055.303743498" watchObservedRunningTime="2026-01-28 10:00:03.165085448 +0000 UTC m=+1056.314749851" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.407084 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jn5hv"] Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.408650 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.419225 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5c01-account-create-update-6dxpf"] Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.420468 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.427119 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.443087 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jn5hv"] Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.452038 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c01-account-create-update-6dxpf"] Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.519604 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef60663-8c16-49a5-84ad-2d59c0972f72-operator-scripts\") pod \"keystone-5c01-account-create-update-6dxpf\" (UID: \"5ef60663-8c16-49a5-84ad-2d59c0972f72\") " pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.519681 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f6zz\" (UniqueName: \"kubernetes.io/projected/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-kube-api-access-9f6zz\") pod \"keystone-db-create-jn5hv\" (UID: \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\") " pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.519702 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnllg\" (UniqueName: \"kubernetes.io/projected/5ef60663-8c16-49a5-84ad-2d59c0972f72-kube-api-access-bnllg\") pod \"keystone-5c01-account-create-update-6dxpf\" (UID: \"5ef60663-8c16-49a5-84ad-2d59c0972f72\") " pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.519725 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-operator-scripts\") pod \"keystone-db-create-jn5hv\" (UID: \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\") " pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.621585 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef60663-8c16-49a5-84ad-2d59c0972f72-operator-scripts\") pod \"keystone-5c01-account-create-update-6dxpf\" (UID: \"5ef60663-8c16-49a5-84ad-2d59c0972f72\") " pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.621704 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f6zz\" (UniqueName: \"kubernetes.io/projected/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-kube-api-access-9f6zz\") pod \"keystone-db-create-jn5hv\" (UID: \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\") " pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.621733 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnllg\" (UniqueName: \"kubernetes.io/projected/5ef60663-8c16-49a5-84ad-2d59c0972f72-kube-api-access-bnllg\") pod \"keystone-5c01-account-create-update-6dxpf\" (UID: \"5ef60663-8c16-49a5-84ad-2d59c0972f72\") " pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.621998 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-operator-scripts\") pod \"keystone-db-create-jn5hv\" (UID: \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\") " pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 
10:00:03.622349 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef60663-8c16-49a5-84ad-2d59c0972f72-operator-scripts\") pod \"keystone-5c01-account-create-update-6dxpf\" (UID: \"5ef60663-8c16-49a5-84ad-2d59c0972f72\") " pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.622651 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-operator-scripts\") pod \"keystone-db-create-jn5hv\" (UID: \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\") " pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.639622 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f6zz\" (UniqueName: \"kubernetes.io/projected/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-kube-api-access-9f6zz\") pod \"keystone-db-create-jn5hv\" (UID: \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\") " pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.640105 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnllg\" (UniqueName: \"kubernetes.io/projected/5ef60663-8c16-49a5-84ad-2d59c0972f72-kube-api-access-bnllg\") pod \"keystone-5c01-account-create-update-6dxpf\" (UID: \"5ef60663-8c16-49a5-84ad-2d59c0972f72\") " pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.673338 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2vzq6"] Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.675454 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.686832 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2vzq6"] Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.723411 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r8kd\" (UniqueName: \"kubernetes.io/projected/794d2867-cc06-44ad-82a0-778faaaa98f3-kube-api-access-9r8kd\") pod \"placement-db-create-2vzq6\" (UID: \"794d2867-cc06-44ad-82a0-778faaaa98f3\") " pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.723479 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/794d2867-cc06-44ad-82a0-778faaaa98f3-operator-scripts\") pod \"placement-db-create-2vzq6\" (UID: \"794d2867-cc06-44ad-82a0-778faaaa98f3\") " pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.734502 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.744358 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.781823 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-438a-account-create-update-c6phc"] Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.782809 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.785195 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.795003 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-438a-account-create-update-c6phc"] Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.824895 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjsr4\" (UniqueName: \"kubernetes.io/projected/84ab8ffc-62a4-4091-abd1-25082ecf0de9-kube-api-access-hjsr4\") pod \"placement-438a-account-create-update-c6phc\" (UID: \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\") " pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.824992 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ab8ffc-62a4-4091-abd1-25082ecf0de9-operator-scripts\") pod \"placement-438a-account-create-update-c6phc\" (UID: \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\") " pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.825043 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r8kd\" (UniqueName: \"kubernetes.io/projected/794d2867-cc06-44ad-82a0-778faaaa98f3-kube-api-access-9r8kd\") pod \"placement-db-create-2vzq6\" (UID: \"794d2867-cc06-44ad-82a0-778faaaa98f3\") " pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.825134 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/794d2867-cc06-44ad-82a0-778faaaa98f3-operator-scripts\") pod 
\"placement-db-create-2vzq6\" (UID: \"794d2867-cc06-44ad-82a0-778faaaa98f3\") " pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.826930 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/794d2867-cc06-44ad-82a0-778faaaa98f3-operator-scripts\") pod \"placement-db-create-2vzq6\" (UID: \"794d2867-cc06-44ad-82a0-778faaaa98f3\") " pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.847261 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r8kd\" (UniqueName: \"kubernetes.io/projected/794d2867-cc06-44ad-82a0-778faaaa98f3-kube-api-access-9r8kd\") pod \"placement-db-create-2vzq6\" (UID: \"794d2867-cc06-44ad-82a0-778faaaa98f3\") " pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.927312 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ab8ffc-62a4-4091-abd1-25082ecf0de9-operator-scripts\") pod \"placement-438a-account-create-update-c6phc\" (UID: \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\") " pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.927797 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjsr4\" (UniqueName: \"kubernetes.io/projected/84ab8ffc-62a4-4091-abd1-25082ecf0de9-kube-api-access-hjsr4\") pod \"placement-438a-account-create-update-c6phc\" (UID: \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\") " pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.928673 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ab8ffc-62a4-4091-abd1-25082ecf0de9-operator-scripts\") pod 
\"placement-438a-account-create-update-c6phc\" (UID: \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\") " pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.946322 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjsr4\" (UniqueName: \"kubernetes.io/projected/84ab8ffc-62a4-4091-abd1-25082ecf0de9-kube-api-access-hjsr4\") pod \"placement-438a-account-create-update-c6phc\" (UID: \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\") " pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.981374 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-nxql9"] Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.982336 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxql9" Jan 28 10:00:03 crc kubenswrapper[4735]: I0128 10:00:03.993644 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nxql9"] Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.028180 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.028875 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64p5x\" (UniqueName: \"kubernetes.io/projected/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-kube-api-access-64p5x\") pod \"glance-db-create-nxql9\" (UID: \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\") " pod="openstack/glance-db-create-nxql9" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.029053 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-operator-scripts\") pod \"glance-db-create-nxql9\" (UID: \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\") " pod="openstack/glance-db-create-nxql9" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.129299 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-da82-account-create-update-wcmq6"] Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.130238 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-operator-scripts\") pod \"glance-db-create-nxql9\" (UID: \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\") " pod="openstack/glance-db-create-nxql9" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.130342 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64p5x\" (UniqueName: \"kubernetes.io/projected/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-kube-api-access-64p5x\") pod \"glance-db-create-nxql9\" (UID: \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\") " pod="openstack/glance-db-create-nxql9" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.130645 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.131472 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-operator-scripts\") pod \"glance-db-create-nxql9\" (UID: \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\") " pod="openstack/glance-db-create-nxql9" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.133568 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.148691 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-da82-account-create-update-wcmq6"] Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.171325 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64p5x\" (UniqueName: \"kubernetes.io/projected/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-kube-api-access-64p5x\") pod \"glance-db-create-nxql9\" (UID: \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\") " pod="openstack/glance-db-create-nxql9" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.184449 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.233280 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gts7n\" (UniqueName: \"kubernetes.io/projected/1572001c-3eab-45ec-a976-6d5bac5fd3ad-kube-api-access-gts7n\") pod \"glance-da82-account-create-update-wcmq6\" (UID: \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\") " pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.233384 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1572001c-3eab-45ec-a976-6d5bac5fd3ad-operator-scripts\") pod \"glance-da82-account-create-update-wcmq6\" (UID: \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\") " pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.234266 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jn5hv"] Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.303898 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nxql9" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.317528 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c01-account-create-update-6dxpf"] Jan 28 10:00:04 crc kubenswrapper[4735]: W0128 10:00:04.319267 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ef60663_8c16_49a5_84ad_2d59c0972f72.slice/crio-15ed289c0bad26a2cbfe6498aaf70f9aa4431b8a333fefa807d0cf4980c11111 WatchSource:0}: Error finding container 15ed289c0bad26a2cbfe6498aaf70f9aa4431b8a333fefa807d0cf4980c11111: Status 404 returned error can't find the container with id 15ed289c0bad26a2cbfe6498aaf70f9aa4431b8a333fefa807d0cf4980c11111 Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.334711 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gts7n\" (UniqueName: \"kubernetes.io/projected/1572001c-3eab-45ec-a976-6d5bac5fd3ad-kube-api-access-gts7n\") pod \"glance-da82-account-create-update-wcmq6\" (UID: \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\") " pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.334809 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1572001c-3eab-45ec-a976-6d5bac5fd3ad-operator-scripts\") pod \"glance-da82-account-create-update-wcmq6\" (UID: \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\") " pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.337076 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1572001c-3eab-45ec-a976-6d5bac5fd3ad-operator-scripts\") pod \"glance-da82-account-create-update-wcmq6\" (UID: \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\") " 
pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.360137 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gts7n\" (UniqueName: \"kubernetes.io/projected/1572001c-3eab-45ec-a976-6d5bac5fd3ad-kube-api-access-gts7n\") pod \"glance-da82-account-create-update-wcmq6\" (UID: \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\") " pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.476812 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.534380 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2vzq6"] Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.743148 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0" Jan 28 10:00:04 crc kubenswrapper[4735]: E0128 10:00:04.743288 4735 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 10:00:04 crc kubenswrapper[4735]: E0128 10:00:04.743494 4735 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 10:00:04 crc kubenswrapper[4735]: E0128 10:00:04.743532 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift podName:718744c9-5139-48b6-bf2c-7a29489d0fa7 nodeName:}" failed. No retries permitted until 2026-01-28 10:00:12.743519268 +0000 UTC m=+1065.893183641 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift") pod "swift-storage-0" (UID: "718744c9-5139-48b6-bf2c-7a29489d0fa7") : configmap "swift-ring-files" not found Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.773553 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.777985 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:04 crc kubenswrapper[4735]: W0128 10:00:04.800392 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84ab8ffc_62a4_4091_abd1_25082ecf0de9.slice/crio-644b7de6e2d56b1ae65548144131fd1683b566a8da612b3ce353ce44a475cd3f WatchSource:0}: Error finding container 644b7de6e2d56b1ae65548144131fd1683b566a8da612b3ce353ce44a475cd3f: Status 404 returned error can't find the container with id 644b7de6e2d56b1ae65548144131fd1683b566a8da612b3ce353ce44a475cd3f Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.808939 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-438a-account-create-update-c6phc"] Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.844215 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b969dfd-602d-47fb-a56e-f059eb358442-config-volume\") pod \"9b969dfd-602d-47fb-a56e-f059eb358442\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.845102 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b969dfd-602d-47fb-a56e-f059eb358442-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"9b969dfd-602d-47fb-a56e-f059eb358442" (UID: "9b969dfd-602d-47fb-a56e-f059eb358442"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.845866 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwn2w\" (UniqueName: \"kubernetes.io/projected/9b969dfd-602d-47fb-a56e-f059eb358442-kube-api-access-jwn2w\") pod \"9b969dfd-602d-47fb-a56e-f059eb358442\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.846231 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-operator-scripts\") pod \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\" (UID: \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\") " Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.846434 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmstq\" (UniqueName: \"kubernetes.io/projected/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-kube-api-access-wmstq\") pod \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\" (UID: \"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f\") " Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.847054 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b969dfd-602d-47fb-a56e-f059eb358442-secret-volume\") pod \"9b969dfd-602d-47fb-a56e-f059eb358442\" (UID: \"9b969dfd-602d-47fb-a56e-f059eb358442\") " Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.848080 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b969dfd-602d-47fb-a56e-f059eb358442-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.848355 4735 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0c3396c-aa5e-4763-a07f-8a05d5e72c9f" (UID: "e0c3396c-aa5e-4763-a07f-8a05d5e72c9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.852399 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-kube-api-access-wmstq" (OuterVolumeSpecName: "kube-api-access-wmstq") pod "e0c3396c-aa5e-4763-a07f-8a05d5e72c9f" (UID: "e0c3396c-aa5e-4763-a07f-8a05d5e72c9f"). InnerVolumeSpecName "kube-api-access-wmstq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.852405 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b969dfd-602d-47fb-a56e-f059eb358442-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b969dfd-602d-47fb-a56e-f059eb358442" (UID: "9b969dfd-602d-47fb-a56e-f059eb358442"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.876186 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b969dfd-602d-47fb-a56e-f059eb358442-kube-api-access-jwn2w" (OuterVolumeSpecName: "kube-api-access-jwn2w") pod "9b969dfd-602d-47fb-a56e-f059eb358442" (UID: "9b969dfd-602d-47fb-a56e-f059eb358442"). InnerVolumeSpecName "kube-api-access-jwn2w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.913450 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nxql9"] Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.950145 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.950172 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmstq\" (UniqueName: \"kubernetes.io/projected/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f-kube-api-access-wmstq\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.950182 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b969dfd-602d-47fb-a56e-f059eb358442-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.950191 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwn2w\" (UniqueName: \"kubernetes.io/projected/9b969dfd-602d-47fb-a56e-f059eb358442-kube-api-access-jwn2w\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:04 crc kubenswrapper[4735]: I0128 10:00:04.974692 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-da82-account-create-update-wcmq6"] Jan 28 10:00:04 crc kubenswrapper[4735]: W0128 10:00:04.989155 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1572001c_3eab_45ec_a976_6d5bac5fd3ad.slice/crio-b33039551cc814040eb346fccd9a1ab663be620a2366aa43ea962081912b9aec WatchSource:0}: Error finding container b33039551cc814040eb346fccd9a1ab663be620a2366aa43ea962081912b9aec: Status 404 returned error can't find the container with id 
b33039551cc814040eb346fccd9a1ab663be620a2366aa43ea962081912b9aec Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.184095 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-da82-account-create-update-wcmq6" event={"ID":"1572001c-3eab-45ec-a976-6d5bac5fd3ad","Type":"ContainerStarted","Data":"b33039551cc814040eb346fccd9a1ab663be620a2366aa43ea962081912b9aec"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.185096 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-438a-account-create-update-c6phc" event={"ID":"84ab8ffc-62a4-4091-abd1-25082ecf0de9","Type":"ContainerStarted","Data":"644b7de6e2d56b1ae65548144131fd1683b566a8da612b3ce353ce44a475cd3f"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.186968 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.186984 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh" event={"ID":"9b969dfd-602d-47fb-a56e-f059eb358442","Type":"ContainerDied","Data":"86eb0777bae6d3e68f4434b9dbfe41a84fae8c37b3f4cdb29e93bb3a1aa68d5c"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.187045 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86eb0777bae6d3e68f4434b9dbfe41a84fae8c37b3f4cdb29e93bb3a1aa68d5c" Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.188113 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxql9" event={"ID":"e0a76ea5-21ec-4ee3-b611-2cbf66deb576","Type":"ContainerStarted","Data":"a8340cdf782193106b284d1dfe45692cd8cf0955b4efde5bd419e1a0d19270de"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.189906 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vzq6" 
event={"ID":"794d2867-cc06-44ad-82a0-778faaaa98f3","Type":"ContainerStarted","Data":"be563d263e9f1cd3a4acd6c3270ae30d1592830897aee677497d2f3396155def"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.192307 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jjxq7" event={"ID":"e0c3396c-aa5e-4763-a07f-8a05d5e72c9f","Type":"ContainerDied","Data":"8f82604ff215761dc2365fd4a43142e85dff4268069fd592cb6abf53c15ed0e6"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.192340 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f82604ff215761dc2365fd4a43142e85dff4268069fd592cb6abf53c15ed0e6" Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.192396 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jjxq7" Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.202028 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c01-account-create-update-6dxpf" event={"ID":"5ef60663-8c16-49a5-84ad-2d59c0972f72","Type":"ContainerStarted","Data":"b9c46a777f818814f0497bbac3d769a44d8ac5d4695015f4852069a0a7773d23"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.202068 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c01-account-create-update-6dxpf" event={"ID":"5ef60663-8c16-49a5-84ad-2d59c0972f72","Type":"ContainerStarted","Data":"15ed289c0bad26a2cbfe6498aaf70f9aa4431b8a333fefa807d0cf4980c11111"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.209837 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jn5hv" event={"ID":"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345","Type":"ContainerStarted","Data":"9df4d0227611fef25d1dcc9f51a25ac51563c6c5cb6a6dc814adf6d60795a305"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.209885 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-jn5hv" event={"ID":"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345","Type":"ContainerStarted","Data":"cdfa1cc649de723a8c03a6d680430d9e59fb9471372162b537fdc2df2f6e5a79"} Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.229230 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5c01-account-create-update-6dxpf" podStartSLOduration=2.229202362 podStartE2EDuration="2.229202362s" podCreationTimestamp="2026-01-28 10:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:05.223487159 +0000 UTC m=+1058.373151532" watchObservedRunningTime="2026-01-28 10:00:05.229202362 +0000 UTC m=+1058.378866765" Jan 28 10:00:05 crc kubenswrapper[4735]: I0128 10:00:05.252488 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-jn5hv" podStartSLOduration=2.252465287 podStartE2EDuration="2.252465287s" podCreationTimestamp="2026-01-28 10:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:05.238509706 +0000 UTC m=+1058.388174079" watchObservedRunningTime="2026-01-28 10:00:05.252465287 +0000 UTC m=+1058.402129670" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.126259 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.231323 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6jwt4"] Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.231547 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" podUID="de9e100a-9fe1-49ac-82b2-cbf314f57ca2" containerName="dnsmasq-dns" 
containerID="cri-o://7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e" gracePeriod=10 Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.234502 4735 generic.go:334] "Generic (PLEG): container finished" podID="e0a76ea5-21ec-4ee3-b611-2cbf66deb576" containerID="9597e9d97e886aad3e5d76c23f7970dc5626550e01d83d5b77bdffd0357d3aad" exitCode=0 Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.234562 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxql9" event={"ID":"e0a76ea5-21ec-4ee3-b611-2cbf66deb576","Type":"ContainerDied","Data":"9597e9d97e886aad3e5d76c23f7970dc5626550e01d83d5b77bdffd0357d3aad"} Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.237221 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vzq6" event={"ID":"794d2867-cc06-44ad-82a0-778faaaa98f3","Type":"ContainerStarted","Data":"f364ec64471274d14a24cf866ae96a566a6f602c1ac7640271c80a37ed80819b"} Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.240230 4735 generic.go:334] "Generic (PLEG): container finished" podID="5ef60663-8c16-49a5-84ad-2d59c0972f72" containerID="b9c46a777f818814f0497bbac3d769a44d8ac5d4695015f4852069a0a7773d23" exitCode=0 Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.240301 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c01-account-create-update-6dxpf" event={"ID":"5ef60663-8c16-49a5-84ad-2d59c0972f72","Type":"ContainerDied","Data":"b9c46a777f818814f0497bbac3d769a44d8ac5d4695015f4852069a0a7773d23"} Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.241658 4735 generic.go:334] "Generic (PLEG): container finished" podID="3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345" containerID="9df4d0227611fef25d1dcc9f51a25ac51563c6c5cb6a6dc814adf6d60795a305" exitCode=0 Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.241712 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jn5hv" 
event={"ID":"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345","Type":"ContainerDied","Data":"9df4d0227611fef25d1dcc9f51a25ac51563c6c5cb6a6dc814adf6d60795a305"} Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.250271 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-da82-account-create-update-wcmq6" event={"ID":"1572001c-3eab-45ec-a976-6d5bac5fd3ad","Type":"ContainerStarted","Data":"6f6a0e5d2e5c14a777885091632b2f1364b8221a51adf31f944b79deaf31fd4a"} Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.263468 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-438a-account-create-update-c6phc" event={"ID":"84ab8ffc-62a4-4091-abd1-25082ecf0de9","Type":"ContainerStarted","Data":"8a4a785933196133458b96110a7e874bafa8139105a869a3d9d6061a3240c8bb"} Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.286195 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-2vzq6" podStartSLOduration=3.28617236 podStartE2EDuration="3.28617236s" podCreationTimestamp="2026-01-28 10:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:06.276680522 +0000 UTC m=+1059.426344895" watchObservedRunningTime="2026-01-28 10:00:06.28617236 +0000 UTC m=+1059.435836733" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.315822 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-da82-account-create-update-wcmq6" podStartSLOduration=2.315793604 podStartE2EDuration="2.315793604s" podCreationTimestamp="2026-01-28 10:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:06.306549133 +0000 UTC m=+1059.456213516" watchObservedRunningTime="2026-01-28 10:00:06.315793604 +0000 UTC m=+1059.465457987" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 
10:00:06.335761 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-438a-account-create-update-c6phc" podStartSLOduration=3.335725516 podStartE2EDuration="3.335725516s" podCreationTimestamp="2026-01-28 10:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:06.329871908 +0000 UTC m=+1059.479536291" watchObservedRunningTime="2026-01-28 10:00:06.335725516 +0000 UTC m=+1059.485389889" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.671272 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.812274 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsqq7\" (UniqueName: \"kubernetes.io/projected/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-kube-api-access-nsqq7\") pod \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.812436 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-sb\") pod \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.812486 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-nb\") pod \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.812618 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-dns-svc\") pod \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.812946 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-config\") pod \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\" (UID: \"de9e100a-9fe1-49ac-82b2-cbf314f57ca2\") " Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.838661 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-kube-api-access-nsqq7" (OuterVolumeSpecName: "kube-api-access-nsqq7") pod "de9e100a-9fe1-49ac-82b2-cbf314f57ca2" (UID: "de9e100a-9fe1-49ac-82b2-cbf314f57ca2"). InnerVolumeSpecName "kube-api-access-nsqq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.856963 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de9e100a-9fe1-49ac-82b2-cbf314f57ca2" (UID: "de9e100a-9fe1-49ac-82b2-cbf314f57ca2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.861675 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de9e100a-9fe1-49ac-82b2-cbf314f57ca2" (UID: "de9e100a-9fe1-49ac-82b2-cbf314f57ca2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.864638 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-config" (OuterVolumeSpecName: "config") pod "de9e100a-9fe1-49ac-82b2-cbf314f57ca2" (UID: "de9e100a-9fe1-49ac-82b2-cbf314f57ca2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.889847 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de9e100a-9fe1-49ac-82b2-cbf314f57ca2" (UID: "de9e100a-9fe1-49ac-82b2-cbf314f57ca2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.914893 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.914939 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsqq7\" (UniqueName: \"kubernetes.io/projected/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-kube-api-access-nsqq7\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.914959 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.914975 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 
10:00:06 crc kubenswrapper[4735]: I0128 10:00:06.914992 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de9e100a-9fe1-49ac-82b2-cbf314f57ca2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.232018 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jjxq7"] Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.239073 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jjxq7"] Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.272861 4735 generic.go:334] "Generic (PLEG): container finished" podID="794d2867-cc06-44ad-82a0-778faaaa98f3" containerID="f364ec64471274d14a24cf866ae96a566a6f602c1ac7640271c80a37ed80819b" exitCode=0 Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.272930 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vzq6" event={"ID":"794d2867-cc06-44ad-82a0-778faaaa98f3","Type":"ContainerDied","Data":"f364ec64471274d14a24cf866ae96a566a6f602c1ac7640271c80a37ed80819b"} Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.274479 4735 generic.go:334] "Generic (PLEG): container finished" podID="1572001c-3eab-45ec-a976-6d5bac5fd3ad" containerID="6f6a0e5d2e5c14a777885091632b2f1364b8221a51adf31f944b79deaf31fd4a" exitCode=0 Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.274530 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-da82-account-create-update-wcmq6" event={"ID":"1572001c-3eab-45ec-a976-6d5bac5fd3ad","Type":"ContainerDied","Data":"6f6a0e5d2e5c14a777885091632b2f1364b8221a51adf31f944b79deaf31fd4a"} Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.275966 4735 generic.go:334] "Generic (PLEG): container finished" podID="84ab8ffc-62a4-4091-abd1-25082ecf0de9" containerID="8a4a785933196133458b96110a7e874bafa8139105a869a3d9d6061a3240c8bb" exitCode=0 Jan 28 10:00:07 
crc kubenswrapper[4735]: I0128 10:00:07.276067 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-438a-account-create-update-c6phc" event={"ID":"84ab8ffc-62a4-4091-abd1-25082ecf0de9","Type":"ContainerDied","Data":"8a4a785933196133458b96110a7e874bafa8139105a869a3d9d6061a3240c8bb"} Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.279454 4735 generic.go:334] "Generic (PLEG): container finished" podID="de9e100a-9fe1-49ac-82b2-cbf314f57ca2" containerID="7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e" exitCode=0 Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.279721 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.281101 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" event={"ID":"de9e100a-9fe1-49ac-82b2-cbf314f57ca2","Type":"ContainerDied","Data":"7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e"} Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.281315 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6jwt4" event={"ID":"de9e100a-9fe1-49ac-82b2-cbf314f57ca2","Type":"ContainerDied","Data":"9955e0a8b1a7729d910d24cb1283d949e826afe6b233f3a69d6018c462de8ab4"} Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.281348 4735 scope.go:117] "RemoveContainer" containerID="7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.332037 4735 scope.go:117] "RemoveContainer" containerID="11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.361143 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6jwt4"] Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.363528 4735 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6jwt4"] Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.401360 4735 scope.go:117] "RemoveContainer" containerID="7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e" Jan 28 10:00:07 crc kubenswrapper[4735]: E0128 10:00:07.402463 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e\": container with ID starting with 7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e not found: ID does not exist" containerID="7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.402506 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e"} err="failed to get container status \"7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e\": rpc error: code = NotFound desc = could not find container \"7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e\": container with ID starting with 7466511f9bffa0806a9aef217eb1049ef1a21bbd388b928475408b8d2546790e not found: ID does not exist" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.402524 4735 scope.go:117] "RemoveContainer" containerID="11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211" Jan 28 10:00:07 crc kubenswrapper[4735]: E0128 10:00:07.402698 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211\": container with ID starting with 11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211 not found: ID does not exist" containerID="11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 
10:00:07.402716 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211"} err="failed to get container status \"11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211\": rpc error: code = NotFound desc = could not find container \"11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211\": container with ID starting with 11266e2bf4d65e1d94a2cea23251c9c2117beed20d19551b9de2bcf292f83211 not found: ID does not exist" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.521526 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9e100a-9fe1-49ac-82b2-cbf314f57ca2" path="/var/lib/kubelet/pods/de9e100a-9fe1-49ac-82b2-cbf314f57ca2/volumes" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.522612 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c3396c-aa5e-4763-a07f-8a05d5e72c9f" path="/var/lib/kubelet/pods/e0c3396c-aa5e-4763-a07f-8a05d5e72c9f/volumes" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.549373 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nxql9" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.624999 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-operator-scripts\") pod \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\" (UID: \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\") " Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.625166 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64p5x\" (UniqueName: \"kubernetes.io/projected/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-kube-api-access-64p5x\") pod \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\" (UID: \"e0a76ea5-21ec-4ee3-b611-2cbf66deb576\") " Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.625591 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0a76ea5-21ec-4ee3-b611-2cbf66deb576" (UID: "e0a76ea5-21ec-4ee3-b611-2cbf66deb576"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.626079 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.628996 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-kube-api-access-64p5x" (OuterVolumeSpecName: "kube-api-access-64p5x") pod "e0a76ea5-21ec-4ee3-b611-2cbf66deb576" (UID: "e0a76ea5-21ec-4ee3-b611-2cbf66deb576"). InnerVolumeSpecName "kube-api-access-64p5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.718084 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.724598 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.727819 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64p5x\" (UniqueName: \"kubernetes.io/projected/e0a76ea5-21ec-4ee3-b611-2cbf66deb576-kube-api-access-64p5x\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.829353 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnllg\" (UniqueName: \"kubernetes.io/projected/5ef60663-8c16-49a5-84ad-2d59c0972f72-kube-api-access-bnllg\") pod \"5ef60663-8c16-49a5-84ad-2d59c0972f72\" (UID: \"5ef60663-8c16-49a5-84ad-2d59c0972f72\") " Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.829447 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef60663-8c16-49a5-84ad-2d59c0972f72-operator-scripts\") pod \"5ef60663-8c16-49a5-84ad-2d59c0972f72\" (UID: \"5ef60663-8c16-49a5-84ad-2d59c0972f72\") " Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.829517 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f6zz\" (UniqueName: \"kubernetes.io/projected/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-kube-api-access-9f6zz\") pod \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\" (UID: \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\") " Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.829589 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-operator-scripts\") pod \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\" (UID: \"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345\") " Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.830006 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345" (UID: "3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.830016 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef60663-8c16-49a5-84ad-2d59c0972f72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ef60663-8c16-49a5-84ad-2d59c0972f72" (UID: "5ef60663-8c16-49a5-84ad-2d59c0972f72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.830468 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.830497 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ef60663-8c16-49a5-84ad-2d59c0972f72-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.832539 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-kube-api-access-9f6zz" (OuterVolumeSpecName: "kube-api-access-9f6zz") pod "3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345" (UID: "3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345"). 
InnerVolumeSpecName "kube-api-access-9f6zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.833071 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef60663-8c16-49a5-84ad-2d59c0972f72-kube-api-access-bnllg" (OuterVolumeSpecName: "kube-api-access-bnllg") pod "5ef60663-8c16-49a5-84ad-2d59c0972f72" (UID: "5ef60663-8c16-49a5-84ad-2d59c0972f72"). InnerVolumeSpecName "kube-api-access-bnllg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.932566 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnllg\" (UniqueName: \"kubernetes.io/projected/5ef60663-8c16-49a5-84ad-2d59c0972f72-kube-api-access-bnllg\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:07 crc kubenswrapper[4735]: I0128 10:00:07.932601 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f6zz\" (UniqueName: \"kubernetes.io/projected/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345-kube-api-access-9f6zz\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.289938 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxql9" event={"ID":"e0a76ea5-21ec-4ee3-b611-2cbf66deb576","Type":"ContainerDied","Data":"a8340cdf782193106b284d1dfe45692cd8cf0955b4efde5bd419e1a0d19270de"} Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.290331 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8340cdf782193106b284d1dfe45692cd8cf0955b4efde5bd419e1a0d19270de" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.289955 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nxql9" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.291362 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c01-account-create-update-6dxpf" event={"ID":"5ef60663-8c16-49a5-84ad-2d59c0972f72","Type":"ContainerDied","Data":"15ed289c0bad26a2cbfe6498aaf70f9aa4431b8a333fefa807d0cf4980c11111"} Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.291396 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ed289c0bad26a2cbfe6498aaf70f9aa4431b8a333fefa807d0cf4980c11111" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.291432 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c01-account-create-update-6dxpf" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.292979 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jn5hv" event={"ID":"3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345","Type":"ContainerDied","Data":"cdfa1cc649de723a8c03a6d680430d9e59fb9471372162b537fdc2df2f6e5a79"} Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.293038 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jn5hv" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.293048 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdfa1cc649de723a8c03a6d680430d9e59fb9471372162b537fdc2df2f6e5a79" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.707639 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.749673 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gts7n\" (UniqueName: \"kubernetes.io/projected/1572001c-3eab-45ec-a976-6d5bac5fd3ad-kube-api-access-gts7n\") pod \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\" (UID: \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\") " Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.749777 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1572001c-3eab-45ec-a976-6d5bac5fd3ad-operator-scripts\") pod \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\" (UID: \"1572001c-3eab-45ec-a976-6d5bac5fd3ad\") " Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.751306 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1572001c-3eab-45ec-a976-6d5bac5fd3ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1572001c-3eab-45ec-a976-6d5bac5fd3ad" (UID: "1572001c-3eab-45ec-a976-6d5bac5fd3ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.756927 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1572001c-3eab-45ec-a976-6d5bac5fd3ad-kube-api-access-gts7n" (OuterVolumeSpecName: "kube-api-access-gts7n") pod "1572001c-3eab-45ec-a976-6d5bac5fd3ad" (UID: "1572001c-3eab-45ec-a976-6d5bac5fd3ad"). InnerVolumeSpecName "kube-api-access-gts7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.775643 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.784252 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.850835 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r8kd\" (UniqueName: \"kubernetes.io/projected/794d2867-cc06-44ad-82a0-778faaaa98f3-kube-api-access-9r8kd\") pod \"794d2867-cc06-44ad-82a0-778faaaa98f3\" (UID: \"794d2867-cc06-44ad-82a0-778faaaa98f3\") " Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.850974 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjsr4\" (UniqueName: \"kubernetes.io/projected/84ab8ffc-62a4-4091-abd1-25082ecf0de9-kube-api-access-hjsr4\") pod \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\" (UID: \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\") " Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.851085 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ab8ffc-62a4-4091-abd1-25082ecf0de9-operator-scripts\") pod \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\" (UID: \"84ab8ffc-62a4-4091-abd1-25082ecf0de9\") " Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.851126 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/794d2867-cc06-44ad-82a0-778faaaa98f3-operator-scripts\") pod \"794d2867-cc06-44ad-82a0-778faaaa98f3\" (UID: \"794d2867-cc06-44ad-82a0-778faaaa98f3\") " Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.851538 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gts7n\" (UniqueName: \"kubernetes.io/projected/1572001c-3eab-45ec-a976-6d5bac5fd3ad-kube-api-access-gts7n\") on node \"crc\" DevicePath 
\"\"" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.851558 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1572001c-3eab-45ec-a976-6d5bac5fd3ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.851731 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ab8ffc-62a4-4091-abd1-25082ecf0de9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84ab8ffc-62a4-4091-abd1-25082ecf0de9" (UID: "84ab8ffc-62a4-4091-abd1-25082ecf0de9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.851829 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794d2867-cc06-44ad-82a0-778faaaa98f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "794d2867-cc06-44ad-82a0-778faaaa98f3" (UID: "794d2867-cc06-44ad-82a0-778faaaa98f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.855406 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ab8ffc-62a4-4091-abd1-25082ecf0de9-kube-api-access-hjsr4" (OuterVolumeSpecName: "kube-api-access-hjsr4") pod "84ab8ffc-62a4-4091-abd1-25082ecf0de9" (UID: "84ab8ffc-62a4-4091-abd1-25082ecf0de9"). InnerVolumeSpecName "kube-api-access-hjsr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.855687 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794d2867-cc06-44ad-82a0-778faaaa98f3-kube-api-access-9r8kd" (OuterVolumeSpecName: "kube-api-access-9r8kd") pod "794d2867-cc06-44ad-82a0-778faaaa98f3" (UID: "794d2867-cc06-44ad-82a0-778faaaa98f3"). 
InnerVolumeSpecName "kube-api-access-9r8kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.953237 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r8kd\" (UniqueName: \"kubernetes.io/projected/794d2867-cc06-44ad-82a0-778faaaa98f3-kube-api-access-9r8kd\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.953299 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjsr4\" (UniqueName: \"kubernetes.io/projected/84ab8ffc-62a4-4091-abd1-25082ecf0de9-kube-api-access-hjsr4\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.953313 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84ab8ffc-62a4-4091-abd1-25082ecf0de9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:08 crc kubenswrapper[4735]: I0128 10:00:08.953326 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/794d2867-cc06-44ad-82a0-778faaaa98f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.304029 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-da82-account-create-update-wcmq6" event={"ID":"1572001c-3eab-45ec-a976-6d5bac5fd3ad","Type":"ContainerDied","Data":"b33039551cc814040eb346fccd9a1ab663be620a2366aa43ea962081912b9aec"} Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.304350 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b33039551cc814040eb346fccd9a1ab663be620a2366aa43ea962081912b9aec" Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.304080 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-da82-account-create-update-wcmq6" Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.308178 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-438a-account-create-update-c6phc" Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.308172 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-438a-account-create-update-c6phc" event={"ID":"84ab8ffc-62a4-4091-abd1-25082ecf0de9","Type":"ContainerDied","Data":"644b7de6e2d56b1ae65548144131fd1683b566a8da612b3ce353ce44a475cd3f"} Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.308459 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="644b7de6e2d56b1ae65548144131fd1683b566a8da612b3ce353ce44a475cd3f" Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.316682 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vzq6" event={"ID":"794d2867-cc06-44ad-82a0-778faaaa98f3","Type":"ContainerDied","Data":"be563d263e9f1cd3a4acd6c3270ae30d1592830897aee677497d2f3396155def"} Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.316739 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be563d263e9f1cd3a4acd6c3270ae30d1592830897aee677497d2f3396155def" Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.316873 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2vzq6" Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.319450 4735 generic.go:334] "Generic (PLEG): container finished" podID="8f9e278a-14f9-4162-acea-69717d616573" containerID="8b24bb5e4a3757e51f6635e0461cfba7b28affe4f65a65ee279b84b0642bcc06" exitCode=0 Jan 28 10:00:09 crc kubenswrapper[4735]: I0128 10:00:09.319508 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-h9q76" event={"ID":"8f9e278a-14f9-4162-acea-69717d616573","Type":"ContainerDied","Data":"8b24bb5e4a3757e51f6635e0461cfba7b28affe4f65a65ee279b84b0642bcc06"} Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.674670 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-h9q76" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.786028 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-swiftconf\") pod \"8f9e278a-14f9-4162-acea-69717d616573\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.786155 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2hmp\" (UniqueName: \"kubernetes.io/projected/8f9e278a-14f9-4162-acea-69717d616573-kube-api-access-g2hmp\") pod \"8f9e278a-14f9-4162-acea-69717d616573\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.786238 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-combined-ca-bundle\") pod \"8f9e278a-14f9-4162-acea-69717d616573\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.786272 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-scripts\") pod \"8f9e278a-14f9-4162-acea-69717d616573\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.786353 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-dispersionconf\") pod \"8f9e278a-14f9-4162-acea-69717d616573\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.786416 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f9e278a-14f9-4162-acea-69717d616573-etc-swift\") pod \"8f9e278a-14f9-4162-acea-69717d616573\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.787287 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-ring-data-devices\") pod \"8f9e278a-14f9-4162-acea-69717d616573\" (UID: \"8f9e278a-14f9-4162-acea-69717d616573\") " Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.787269 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9e278a-14f9-4162-acea-69717d616573-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8f9e278a-14f9-4162-acea-69717d616573" (UID: "8f9e278a-14f9-4162-acea-69717d616573"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.787545 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8f9e278a-14f9-4162-acea-69717d616573" (UID: "8f9e278a-14f9-4162-acea-69717d616573"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.787779 4735 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8f9e278a-14f9-4162-acea-69717d616573-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.787798 4735 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.791896 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9e278a-14f9-4162-acea-69717d616573-kube-api-access-g2hmp" (OuterVolumeSpecName: "kube-api-access-g2hmp") pod "8f9e278a-14f9-4162-acea-69717d616573" (UID: "8f9e278a-14f9-4162-acea-69717d616573"). InnerVolumeSpecName "kube-api-access-g2hmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.801331 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8f9e278a-14f9-4162-acea-69717d616573" (UID: "8f9e278a-14f9-4162-acea-69717d616573"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.812518 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f9e278a-14f9-4162-acea-69717d616573" (UID: "8f9e278a-14f9-4162-acea-69717d616573"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.813280 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8f9e278a-14f9-4162-acea-69717d616573" (UID: "8f9e278a-14f9-4162-acea-69717d616573"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.826316 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-scripts" (OuterVolumeSpecName: "scripts") pod "8f9e278a-14f9-4162-acea-69717d616573" (UID: "8f9e278a-14f9-4162-acea-69717d616573"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.867601 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p89wb"] Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.867889 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c3396c-aa5e-4763-a07f-8a05d5e72c9f" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.867905 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c3396c-aa5e-4763-a07f-8a05d5e72c9f" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.867921 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1572001c-3eab-45ec-a976-6d5bac5fd3ad" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.867927 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1572001c-3eab-45ec-a976-6d5bac5fd3ad" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.867936 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9e100a-9fe1-49ac-82b2-cbf314f57ca2" containerName="dnsmasq-dns" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.867943 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9e100a-9fe1-49ac-82b2-cbf314f57ca2" containerName="dnsmasq-dns" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.867956 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef60663-8c16-49a5-84ad-2d59c0972f72" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.867961 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef60663-8c16-49a5-84ad-2d59c0972f72" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.867972 4735 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e0a76ea5-21ec-4ee3-b611-2cbf66deb576" containerName="mariadb-database-create" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.867977 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a76ea5-21ec-4ee3-b611-2cbf66deb576" containerName="mariadb-database-create" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.867988 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9e278a-14f9-4162-acea-69717d616573" containerName="swift-ring-rebalance" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.867993 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9e278a-14f9-4162-acea-69717d616573" containerName="swift-ring-rebalance" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.868003 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9e100a-9fe1-49ac-82b2-cbf314f57ca2" containerName="init" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868008 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9e100a-9fe1-49ac-82b2-cbf314f57ca2" containerName="init" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.868018 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794d2867-cc06-44ad-82a0-778faaaa98f3" containerName="mariadb-database-create" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868023 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="794d2867-cc06-44ad-82a0-778faaaa98f3" containerName="mariadb-database-create" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.868034 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ab8ffc-62a4-4091-abd1-25082ecf0de9" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868040 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ab8ffc-62a4-4091-abd1-25082ecf0de9" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.868052 4735 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345" containerName="mariadb-database-create" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868058 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345" containerName="mariadb-database-create" Jan 28 10:00:10 crc kubenswrapper[4735]: E0128 10:00:10.868069 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b969dfd-602d-47fb-a56e-f059eb358442" containerName="collect-profiles" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868075 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b969dfd-602d-47fb-a56e-f059eb358442" containerName="collect-profiles" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868202 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a76ea5-21ec-4ee3-b611-2cbf66deb576" containerName="mariadb-database-create" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868215 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b969dfd-602d-47fb-a56e-f059eb358442" containerName="collect-profiles" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868226 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef60663-8c16-49a5-84ad-2d59c0972f72" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868234 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9e100a-9fe1-49ac-82b2-cbf314f57ca2" containerName="dnsmasq-dns" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868243 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345" containerName="mariadb-database-create" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868251 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1572001c-3eab-45ec-a976-6d5bac5fd3ad" 
containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868262 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9e278a-14f9-4162-acea-69717d616573" containerName="swift-ring-rebalance" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868269 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ab8ffc-62a4-4091-abd1-25082ecf0de9" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868281 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c3396c-aa5e-4763-a07f-8a05d5e72c9f" containerName="mariadb-account-create-update" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868291 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="794d2867-cc06-44ad-82a0-778faaaa98f3" containerName="mariadb-database-create" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.868734 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p89wb" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.872186 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.877993 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p89wb"] Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.889325 4735 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.889362 4735 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.889375 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2hmp\" (UniqueName: \"kubernetes.io/projected/8f9e278a-14f9-4162-acea-69717d616573-kube-api-access-g2hmp\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.889387 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9e278a-14f9-4162-acea-69717d616573-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.889399 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e278a-14f9-4162-acea-69717d616573-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.990677 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdcf5\" (UniqueName: 
\"kubernetes.io/projected/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-kube-api-access-tdcf5\") pod \"root-account-create-update-p89wb\" (UID: \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\") " pod="openstack/root-account-create-update-p89wb" Jan 28 10:00:10 crc kubenswrapper[4735]: I0128 10:00:10.991192 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-operator-scripts\") pod \"root-account-create-update-p89wb\" (UID: \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\") " pod="openstack/root-account-create-update-p89wb" Jan 28 10:00:11 crc kubenswrapper[4735]: I0128 10:00:11.093063 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdcf5\" (UniqueName: \"kubernetes.io/projected/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-kube-api-access-tdcf5\") pod \"root-account-create-update-p89wb\" (UID: \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\") " pod="openstack/root-account-create-update-p89wb" Jan 28 10:00:11 crc kubenswrapper[4735]: I0128 10:00:11.093254 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-operator-scripts\") pod \"root-account-create-update-p89wb\" (UID: \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\") " pod="openstack/root-account-create-update-p89wb" Jan 28 10:00:11 crc kubenswrapper[4735]: I0128 10:00:11.094061 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-operator-scripts\") pod \"root-account-create-update-p89wb\" (UID: \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\") " pod="openstack/root-account-create-update-p89wb" Jan 28 10:00:11 crc kubenswrapper[4735]: I0128 10:00:11.114679 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tdcf5\" (UniqueName: \"kubernetes.io/projected/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-kube-api-access-tdcf5\") pod \"root-account-create-update-p89wb\" (UID: \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\") " pod="openstack/root-account-create-update-p89wb"
Jan 28 10:00:11 crc kubenswrapper[4735]: I0128 10:00:11.200564 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p89wb"
Jan 28 10:00:11 crc kubenswrapper[4735]: I0128 10:00:11.344118 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-h9q76" event={"ID":"8f9e278a-14f9-4162-acea-69717d616573","Type":"ContainerDied","Data":"c56f1c059f780acf885ba6d991295c75b7cf3f44b8227093a98b6f0cf2348901"}
Jan 28 10:00:11 crc kubenswrapper[4735]: I0128 10:00:11.344650 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c56f1c059f780acf885ba6d991295c75b7cf3f44b8227093a98b6f0cf2348901"
Jan 28 10:00:11 crc kubenswrapper[4735]: I0128 10:00:11.344792 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-h9q76"
Jan 28 10:00:11 crc kubenswrapper[4735]: I0128 10:00:11.629355 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p89wb"]
Jan 28 10:00:11 crc kubenswrapper[4735]: W0128 10:00:11.632728 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee903ef4_d589_4e2d_b0f7_02594a18ab0c.slice/crio-15b96937f68664f1b62ab39b8fdb9df55dc4b36ec71675dc696cc94906ab7a7f WatchSource:0}: Error finding container 15b96937f68664f1b62ab39b8fdb9df55dc4b36ec71675dc696cc94906ab7a7f: Status 404 returned error can't find the container with id 15b96937f68664f1b62ab39b8fdb9df55dc4b36ec71675dc696cc94906ab7a7f
Jan 28 10:00:12 crc kubenswrapper[4735]: I0128 10:00:12.358702 4735 generic.go:334] "Generic (PLEG): container finished" podID="ee903ef4-d589-4e2d-b0f7-02594a18ab0c" containerID="550cc2abfbd713b4a2ca9ed601f8b42d468f50ede096d6cd369190d22864ce5b" exitCode=0
Jan 28 10:00:12 crc kubenswrapper[4735]: I0128 10:00:12.358793 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p89wb" event={"ID":"ee903ef4-d589-4e2d-b0f7-02594a18ab0c","Type":"ContainerDied","Data":"550cc2abfbd713b4a2ca9ed601f8b42d468f50ede096d6cd369190d22864ce5b"}
Jan 28 10:00:12 crc kubenswrapper[4735]: I0128 10:00:12.359103 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p89wb" event={"ID":"ee903ef4-d589-4e2d-b0f7-02594a18ab0c","Type":"ContainerStarted","Data":"15b96937f68664f1b62ab39b8fdb9df55dc4b36ec71675dc696cc94906ab7a7f"}
Jan 28 10:00:12 crc kubenswrapper[4735]: I0128 10:00:12.823436 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0"
Jan 28 10:00:12 crc kubenswrapper[4735]: I0128 10:00:12.835735 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/718744c9-5139-48b6-bf2c-7a29489d0fa7-etc-swift\") pod \"swift-storage-0\" (UID: \"718744c9-5139-48b6-bf2c-7a29489d0fa7\") " pod="openstack/swift-storage-0"
Jan 28 10:00:12 crc kubenswrapper[4735]: I0128 10:00:12.849444 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 28 10:00:13 crc kubenswrapper[4735]: I0128 10:00:13.413021 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 28 10:00:13 crc kubenswrapper[4735]: W0128 10:00:13.413673 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod718744c9_5139_48b6_bf2c_7a29489d0fa7.slice/crio-66becee7776eee0f948cc63342f2001abb23f6b312335f2f75c439695a079052 WatchSource:0}: Error finding container 66becee7776eee0f948cc63342f2001abb23f6b312335f2f75c439695a079052: Status 404 returned error can't find the container with id 66becee7776eee0f948cc63342f2001abb23f6b312335f2f75c439695a079052
Jan 28 10:00:13 crc kubenswrapper[4735]: I0128 10:00:13.644979 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p89wb"
Jan 28 10:00:13 crc kubenswrapper[4735]: I0128 10:00:13.737788 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdcf5\" (UniqueName: \"kubernetes.io/projected/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-kube-api-access-tdcf5\") pod \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\" (UID: \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\") "
Jan 28 10:00:13 crc kubenswrapper[4735]: I0128 10:00:13.737906 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-operator-scripts\") pod \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\" (UID: \"ee903ef4-d589-4e2d-b0f7-02594a18ab0c\") "
Jan 28 10:00:13 crc kubenswrapper[4735]: I0128 10:00:13.738781 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee903ef4-d589-4e2d-b0f7-02594a18ab0c" (UID: "ee903ef4-d589-4e2d-b0f7-02594a18ab0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 10:00:13 crc kubenswrapper[4735]: I0128 10:00:13.742710 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-kube-api-access-tdcf5" (OuterVolumeSpecName: "kube-api-access-tdcf5") pod "ee903ef4-d589-4e2d-b0f7-02594a18ab0c" (UID: "ee903ef4-d589-4e2d-b0f7-02594a18ab0c"). InnerVolumeSpecName "kube-api-access-tdcf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 10:00:13 crc kubenswrapper[4735]: I0128 10:00:13.839211 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 10:00:13 crc kubenswrapper[4735]: I0128 10:00:13.839274 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdcf5\" (UniqueName: \"kubernetes.io/projected/ee903ef4-d589-4e2d-b0f7-02594a18ab0c-kube-api-access-tdcf5\") on node \"crc\" DevicePath \"\""
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.264916 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dfks4"]
Jan 28 10:00:14 crc kubenswrapper[4735]: E0128 10:00:14.265280 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee903ef4-d589-4e2d-b0f7-02594a18ab0c" containerName="mariadb-account-create-update"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.265296 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee903ef4-d589-4e2d-b0f7-02594a18ab0c" containerName="mariadb-account-create-update"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.265457 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee903ef4-d589-4e2d-b0f7-02594a18ab0c" containerName="mariadb-account-create-update"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.266879 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.270080 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jmbxn"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.270122 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.282031 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dfks4"]
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.350697 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-db-sync-config-data\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.350968 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-config-data\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.351032 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-combined-ca-bundle\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.351288 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdx2z\" (UniqueName: \"kubernetes.io/projected/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-kube-api-access-xdx2z\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.377890 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"66becee7776eee0f948cc63342f2001abb23f6b312335f2f75c439695a079052"}
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.379884 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p89wb" event={"ID":"ee903ef4-d589-4e2d-b0f7-02594a18ab0c","Type":"ContainerDied","Data":"15b96937f68664f1b62ab39b8fdb9df55dc4b36ec71675dc696cc94906ab7a7f"}
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.379922 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15b96937f68664f1b62ab39b8fdb9df55dc4b36ec71675dc696cc94906ab7a7f"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.380008 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p89wb"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.453309 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdx2z\" (UniqueName: \"kubernetes.io/projected/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-kube-api-access-xdx2z\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.453425 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-db-sync-config-data\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.453483 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-config-data\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.453507 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-combined-ca-bundle\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.458512 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-db-sync-config-data\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.458616 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-combined-ca-bundle\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.458867 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-config-data\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.471440 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdx2z\" (UniqueName: \"kubernetes.io/projected/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-kube-api-access-xdx2z\") pod \"glance-db-sync-dfks4\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:14 crc kubenswrapper[4735]: I0128 10:00:14.591046 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dfks4"
Jan 28 10:00:15 crc kubenswrapper[4735]: I0128 10:00:15.059390 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dfks4"]
Jan 28 10:00:15 crc kubenswrapper[4735]: W0128 10:00:15.076989 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fcc5331_6d94_4b66_9e5b_2bb8bced49dd.slice/crio-cc67eb09ccce81d27c3458ad888af970d24c422356694ce4b788e54564ce66a6 WatchSource:0}: Error finding container cc67eb09ccce81d27c3458ad888af970d24c422356694ce4b788e54564ce66a6: Status 404 returned error can't find the container with id cc67eb09ccce81d27c3458ad888af970d24c422356694ce4b788e54564ce66a6
Jan 28 10:00:15 crc kubenswrapper[4735]: I0128 10:00:15.393417 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"88013918946ed608a66b3e4c12c7864f1c57c3ccb0699faebc637fc62e9e103b"}
Jan 28 10:00:15 crc kubenswrapper[4735]: I0128 10:00:15.395760 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dfks4" event={"ID":"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd","Type":"ContainerStarted","Data":"cc67eb09ccce81d27c3458ad888af970d24c422356694ce4b788e54564ce66a6"}
Jan 28 10:00:15 crc kubenswrapper[4735]: I0128 10:00:15.663910 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 28 10:00:16 crc kubenswrapper[4735]: I0128 10:00:16.411072 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"16b3b680b2403cee13a4ea65fdd660c3dc3dc64f96be0397881c8f15c0c062b6"}
Jan 28 10:00:16 crc kubenswrapper[4735]: I0128 10:00:16.411919 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"331a1ff489fd16fb21f7422f1522227fd46d21e06da8e4bc65e84b4a737975ad"}
Jan 28 10:00:17 crc kubenswrapper[4735]: I0128 10:00:17.257389 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p89wb"]
Jan 28 10:00:17 crc kubenswrapper[4735]: I0128 10:00:17.265617 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p89wb"]
Jan 28 10:00:17 crc kubenswrapper[4735]: I0128 10:00:17.519928 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee903ef4-d589-4e2d-b0f7-02594a18ab0c" path="/var/lib/kubelet/pods/ee903ef4-d589-4e2d-b0f7-02594a18ab0c/volumes"
Jan 28 10:00:18 crc kubenswrapper[4735]: I0128 10:00:18.439881 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"7d8a5a7d5a3ad1b44e52cc4d8033f32306c1ba69fd8d1d420ac278c81059c68f"}
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.128207 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rdcgm" podUID="21c583b0-22fd-4ccf-907a-1bb119119830" containerName="ovn-controller" probeResult="failure" output=<
Jan 28 10:00:19 crc kubenswrapper[4735]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 28 10:00:19 crc kubenswrapper[4735]: >
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.167149 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lh9fz"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.170099 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lh9fz"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.406662 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rdcgm-config-v5z5m"]
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.407624 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.409885 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.414453 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdcgm-config-v5z5m"]
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.469951 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"d334770ba33e6353bb1131deba0b3223317ade22a3b70c60310a05b65f3a1d66"}
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.544434 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxrg\" (UniqueName: \"kubernetes.io/projected/d516eb57-a1ff-4a21-ae93-699e7257b943-kube-api-access-mxxrg\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.544502 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-scripts\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.544804 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.545037 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-log-ovn\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.545111 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run-ovn\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.545181 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-additional-scripts\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.647077 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.647180 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-log-ovn\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.647217 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run-ovn\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.647254 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-additional-scripts\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.647283 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxxrg\" (UniqueName: \"kubernetes.io/projected/d516eb57-a1ff-4a21-ae93-699e7257b943-kube-api-access-mxxrg\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.647330 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-scripts\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.648325 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-log-ovn\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.648324 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.648373 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run-ovn\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.648905 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-additional-scripts\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.650087 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-scripts\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.666863 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxrg\" (UniqueName: \"kubernetes.io/projected/d516eb57-a1ff-4a21-ae93-699e7257b943-kube-api-access-mxxrg\") pod \"ovn-controller-rdcgm-config-v5z5m\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:19 crc kubenswrapper[4735]: I0128 10:00:19.730139 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:20 crc kubenswrapper[4735]: I0128 10:00:20.060835 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdcgm-config-v5z5m"]
Jan 28 10:00:20 crc kubenswrapper[4735]: I0128 10:00:20.480000 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdcgm-config-v5z5m" event={"ID":"d516eb57-a1ff-4a21-ae93-699e7257b943","Type":"ContainerStarted","Data":"24e45cdaa971a1e834c2b55e2121b5958f39c764e37963c8c405b9ecd00cdec4"}
Jan 28 10:00:20 crc kubenswrapper[4735]: I0128 10:00:20.480491 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdcgm-config-v5z5m" event={"ID":"d516eb57-a1ff-4a21-ae93-699e7257b943","Type":"ContainerStarted","Data":"91dd4868074d9f4ae6497e3ad951aca9d466c170600fec951f50ed0a46ebe16b"}
Jan 28 10:00:20 crc kubenswrapper[4735]: I0128 10:00:20.485163 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"f7d0f2323c8b5392433ec8478916bf1fd3d93e3fb7de0f652a510d957beed4c0"}
Jan 28 10:00:20 crc kubenswrapper[4735]: I0128 10:00:20.485213 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"cf679604efef9403387c4610e1c35141b61626fe4eb67bbd7b57efb28720fc24"}
Jan 28 10:00:20 crc kubenswrapper[4735]: I0128 10:00:20.485226 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"3840514a398d2f0aafd3cf5399d764968ee3ad79124fb2d0479b25490368729b"}
Jan 28 10:00:20 crc kubenswrapper[4735]: I0128 10:00:20.497536 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rdcgm-config-v5z5m" podStartSLOduration=1.497517424 podStartE2EDuration="1.497517424s" podCreationTimestamp="2026-01-28 10:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:20.494903598 +0000 UTC m=+1073.644567971" watchObservedRunningTime="2026-01-28 10:00:20.497517424 +0000 UTC m=+1073.647181797"
Jan 28 10:00:21 crc kubenswrapper[4735]: I0128 10:00:21.515103 4735 generic.go:334] "Generic (PLEG): container finished" podID="d516eb57-a1ff-4a21-ae93-699e7257b943" containerID="24e45cdaa971a1e834c2b55e2121b5958f39c764e37963c8c405b9ecd00cdec4" exitCode=0
Jan 28 10:00:21 crc kubenswrapper[4735]: I0128 10:00:21.521378 4735 generic.go:334] "Generic (PLEG): container finished" podID="348630cf-4549-4b7a-8784-9b1640f64c39" containerID="074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898" exitCode=0
Jan 28 10:00:21 crc kubenswrapper[4735]: I0128 10:00:21.523876 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdcgm-config-v5z5m" event={"ID":"d516eb57-a1ff-4a21-ae93-699e7257b943","Type":"ContainerDied","Data":"24e45cdaa971a1e834c2b55e2121b5958f39c764e37963c8c405b9ecd00cdec4"}
Jan 28 10:00:21 crc kubenswrapper[4735]: I0128 10:00:21.523919 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"348630cf-4549-4b7a-8784-9b1640f64c39","Type":"ContainerDied","Data":"074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898"}
Jan 28 10:00:21 crc kubenswrapper[4735]: I0128 10:00:21.536273 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"a3edc8290851fca2bbd43a8c6e1532be0537ae4f96b7c097f011af395c5265e8"}
Jan 28 10:00:21 crc kubenswrapper[4735]: I0128 10:00:21.540056 4735 generic.go:334] "Generic (PLEG): container finished" podID="bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" containerID="eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90" exitCode=0
Jan 28 10:00:21 crc kubenswrapper[4735]: I0128 10:00:21.540096 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1","Type":"ContainerDied","Data":"eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90"}
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.280564 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-82wv7"]
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.282176 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-82wv7"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.288818 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.312315 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-82wv7"]
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.389571 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db0a104c-3dab-4e49-b854-3aea57352423-operator-scripts\") pod \"root-account-create-update-82wv7\" (UID: \"db0a104c-3dab-4e49-b854-3aea57352423\") " pod="openstack/root-account-create-update-82wv7"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.389681 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnd6l\" (UniqueName: \"kubernetes.io/projected/db0a104c-3dab-4e49-b854-3aea57352423-kube-api-access-gnd6l\") pod \"root-account-create-update-82wv7\" (UID: \"db0a104c-3dab-4e49-b854-3aea57352423\") " pod="openstack/root-account-create-update-82wv7"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.491546 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnd6l\" (UniqueName: \"kubernetes.io/projected/db0a104c-3dab-4e49-b854-3aea57352423-kube-api-access-gnd6l\") pod \"root-account-create-update-82wv7\" (UID: \"db0a104c-3dab-4e49-b854-3aea57352423\") " pod="openstack/root-account-create-update-82wv7"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.492026 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db0a104c-3dab-4e49-b854-3aea57352423-operator-scripts\") pod \"root-account-create-update-82wv7\" (UID: \"db0a104c-3dab-4e49-b854-3aea57352423\") " pod="openstack/root-account-create-update-82wv7"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.493417 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db0a104c-3dab-4e49-b854-3aea57352423-operator-scripts\") pod \"root-account-create-update-82wv7\" (UID: \"db0a104c-3dab-4e49-b854-3aea57352423\") " pod="openstack/root-account-create-update-82wv7"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.513512 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnd6l\" (UniqueName: \"kubernetes.io/projected/db0a104c-3dab-4e49-b854-3aea57352423-kube-api-access-gnd6l\") pod \"root-account-create-update-82wv7\" (UID: \"db0a104c-3dab-4e49-b854-3aea57352423\") " pod="openstack/root-account-create-update-82wv7"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.562716 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"9f1259fb33b7b5eaf50bb521f238ee173491d6c06a729d5089585fc56ff271cc"}
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.562843 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"f381f7807d0cb68c542f61efd6ad0035ef6d13c579b2a28e0b336ae47099640c"}
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.562854 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"394f9769e6d49614ab32eab774826e50ed2bd0a3f12b4b5ab1b824430ba6be97"}
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.562862 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"cace6d74fe2583d9c708209d66221533d8a54ed3f7babab7fcb0df132e402e8a"}
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.562871 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"faa9f3a38ea95f0ab73962e10177989fbf6df44a972b46eeaa65233165a21053"}
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.564816 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1","Type":"ContainerStarted","Data":"dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910"}
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.566520 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.574707 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"348630cf-4549-4b7a-8784-9b1640f64c39","Type":"ContainerStarted","Data":"19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea"}
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.575003 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.607909 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.497959052 podStartE2EDuration="1m3.60788325s" podCreationTimestamp="2026-01-28 09:59:19 +0000 UTC" firstStartedPulling="2026-01-28 09:59:32.937432719 +0000 UTC m=+1026.087097092" lastFinishedPulling="2026-01-28 09:59:46.047356917 +0000 UTC m=+1039.197021290" observedRunningTime="2026-01-28 10:00:22.601264874 +0000 UTC m=+1075.750929247" watchObservedRunningTime="2026-01-28 10:00:22.60788325 +0000 UTC m=+1075.757547633"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.608372 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-82wv7"
Jan 28 10:00:22 crc kubenswrapper[4735]: I0128 10:00:22.633620 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.835606067 podStartE2EDuration="1m3.633595917s" podCreationTimestamp="2026-01-28 09:59:19 +0000 UTC" firstStartedPulling="2026-01-28 09:59:33.701768684 +0000 UTC m=+1026.851433067" lastFinishedPulling="2026-01-28 09:59:46.499758544 +0000 UTC m=+1039.649422917" observedRunningTime="2026-01-28 10:00:22.628401886 +0000 UTC m=+1075.778066259" watchObservedRunningTime="2026-01-28 10:00:22.633595917 +0000 UTC m=+1075.783260290"
Jan 28 10:00:24 crc kubenswrapper[4735]: I0128 10:00:24.149616 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rdcgm"
Jan 28 10:00:29 crc kubenswrapper[4735]: I0128 10:00:29.930466 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdcgm-config-v5z5m"
Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.032178 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run-ovn\") pod \"d516eb57-a1ff-4a21-ae93-699e7257b943\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") "
Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.032266 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-additional-scripts\") pod \"d516eb57-a1ff-4a21-ae93-699e7257b943\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") "
Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.032363 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-scripts\") pod \"d516eb57-a1ff-4a21-ae93-699e7257b943\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") "
Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.032355 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d516eb57-a1ff-4a21-ae93-699e7257b943" (UID: "d516eb57-a1ff-4a21-ae93-699e7257b943"). InnerVolumeSpecName "var-run-ovn".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.032409 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxxrg\" (UniqueName: \"kubernetes.io/projected/d516eb57-a1ff-4a21-ae93-699e7257b943-kube-api-access-mxxrg\") pod \"d516eb57-a1ff-4a21-ae93-699e7257b943\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.032439 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-log-ovn\") pod \"d516eb57-a1ff-4a21-ae93-699e7257b943\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.032533 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run\") pod \"d516eb57-a1ff-4a21-ae93-699e7257b943\" (UID: \"d516eb57-a1ff-4a21-ae93-699e7257b943\") " Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.032939 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d516eb57-a1ff-4a21-ae93-699e7257b943" (UID: "d516eb57-a1ff-4a21-ae93-699e7257b943"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.032998 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run" (OuterVolumeSpecName: "var-run") pod "d516eb57-a1ff-4a21-ae93-699e7257b943" (UID: "d516eb57-a1ff-4a21-ae93-699e7257b943"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.033666 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d516eb57-a1ff-4a21-ae93-699e7257b943" (UID: "d516eb57-a1ff-4a21-ae93-699e7257b943"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.033909 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-scripts" (OuterVolumeSpecName: "scripts") pod "d516eb57-a1ff-4a21-ae93-699e7257b943" (UID: "d516eb57-a1ff-4a21-ae93-699e7257b943"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.034196 4735 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.034213 4735 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.034221 4735 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.034229 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d516eb57-a1ff-4a21-ae93-699e7257b943-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:30 crc 
kubenswrapper[4735]: I0128 10:00:30.034236 4735 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d516eb57-a1ff-4a21-ae93-699e7257b943-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.038885 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d516eb57-a1ff-4a21-ae93-699e7257b943-kube-api-access-mxxrg" (OuterVolumeSpecName: "kube-api-access-mxxrg") pod "d516eb57-a1ff-4a21-ae93-699e7257b943" (UID: "d516eb57-a1ff-4a21-ae93-699e7257b943"). InnerVolumeSpecName "kube-api-access-mxxrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.135578 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxxrg\" (UniqueName: \"kubernetes.io/projected/d516eb57-a1ff-4a21-ae93-699e7257b943-kube-api-access-mxxrg\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.219488 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-82wv7"] Jan 28 10:00:30 crc kubenswrapper[4735]: W0128 10:00:30.228137 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb0a104c_3dab_4e49_b854_3aea57352423.slice/crio-7795e606018f36238cd0915a52d353023a27dde617e3078017393ec4e39137b6 WatchSource:0}: Error finding container 7795e606018f36238cd0915a52d353023a27dde617e3078017393ec4e39137b6: Status 404 returned error can't find the container with id 7795e606018f36238cd0915a52d353023a27dde617e3078017393ec4e39137b6 Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.232641 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.649618 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-rdcgm-config-v5z5m" event={"ID":"d516eb57-a1ff-4a21-ae93-699e7257b943","Type":"ContainerDied","Data":"91dd4868074d9f4ae6497e3ad951aca9d466c170600fec951f50ed0a46ebe16b"} Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.649699 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91dd4868074d9f4ae6497e3ad951aca9d466c170600fec951f50ed0a46ebe16b" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.649779 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdcgm-config-v5z5m" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.661069 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82wv7" event={"ID":"db0a104c-3dab-4e49-b854-3aea57352423","Type":"ContainerStarted","Data":"21cc0ac2bc68b13798fc3f6ca713b1a7b1a409eb82d53c526de71b0aac5f828b"} Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.661115 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82wv7" event={"ID":"db0a104c-3dab-4e49-b854-3aea57352423","Type":"ContainerStarted","Data":"7795e606018f36238cd0915a52d353023a27dde617e3078017393ec4e39137b6"} Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.662844 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dfks4" event={"ID":"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd","Type":"ContainerStarted","Data":"61e47de233e0f4cd2f29c46baeac05d93a4e2c926bb585a8fa7e15b3427378b1"} Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.670290 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"718744c9-5139-48b6-bf2c-7a29489d0fa7","Type":"ContainerStarted","Data":"4f1e24483330eac757f8c8dd619dd0bb6344eaececdf69d09eb4703a9800ba12"} Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.700885 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-db-sync-dfks4" podStartSLOduration=1.9870707300000001 podStartE2EDuration="16.700850549s" podCreationTimestamp="2026-01-28 10:00:14 +0000 UTC" firstStartedPulling="2026-01-28 10:00:15.079481646 +0000 UTC m=+1068.229146019" lastFinishedPulling="2026-01-28 10:00:29.793261455 +0000 UTC m=+1082.942925838" observedRunningTime="2026-01-28 10:00:30.699091905 +0000 UTC m=+1083.848756278" watchObservedRunningTime="2026-01-28 10:00:30.700850549 +0000 UTC m=+1083.850514972" Jan 28 10:00:30 crc kubenswrapper[4735]: I0128 10:00:30.756290 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=28.114938771 podStartE2EDuration="35.756255452s" podCreationTimestamp="2026-01-28 09:59:55 +0000 UTC" firstStartedPulling="2026-01-28 10:00:13.415965028 +0000 UTC m=+1066.565629401" lastFinishedPulling="2026-01-28 10:00:21.057281709 +0000 UTC m=+1074.206946082" observedRunningTime="2026-01-28 10:00:30.746926577 +0000 UTC m=+1083.896590950" watchObservedRunningTime="2026-01-28 10:00:30.756255452 +0000 UTC m=+1083.905919875" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.015066 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.030001 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fx77c"] Jan 28 10:00:31 crc kubenswrapper[4735]: E0128 10:00:31.030497 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d516eb57-a1ff-4a21-ae93-699e7257b943" containerName="ovn-config" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.030517 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d516eb57-a1ff-4a21-ae93-699e7257b943" containerName="ovn-config" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.030725 4735 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d516eb57-a1ff-4a21-ae93-699e7257b943" containerName="ovn-config" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.031866 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.033822 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.052786 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fx77c"] Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.137591 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rdcgm-config-v5z5m"] Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.151245 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rdcgm-config-v5z5m"] Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.155764 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-config\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.156109 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.156201 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.156281 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.156345 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.156405 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6s77\" (UniqueName: \"kubernetes.io/projected/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-kube-api-access-w6s77\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.257849 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.257931 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.257976 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.258012 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.258060 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6s77\" (UniqueName: \"kubernetes.io/projected/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-kube-api-access-w6s77\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.258136 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-config\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.258890 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.259163 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.259435 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-config\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.259508 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.259563 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.276198 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6s77\" (UniqueName: \"kubernetes.io/projected/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-kube-api-access-w6s77\") pod 
\"dnsmasq-dns-77585f5f8c-fx77c\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.360022 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.513268 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d516eb57-a1ff-4a21-ae93-699e7257b943" path="/var/lib/kubelet/pods/d516eb57-a1ff-4a21-ae93-699e7257b943/volumes" Jan 28 10:00:31 crc kubenswrapper[4735]: W0128 10:00:31.640987 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddacef5b4_a838_423f_8ba9_c7d56df3bdbe.slice/crio-772389bc94a0117118c7ab08ef9f20380e490163c8ad1ebde7ede0e1cdcc10a4 WatchSource:0}: Error finding container 772389bc94a0117118c7ab08ef9f20380e490163c8ad1ebde7ede0e1cdcc10a4: Status 404 returned error can't find the container with id 772389bc94a0117118c7ab08ef9f20380e490163c8ad1ebde7ede0e1cdcc10a4 Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.648915 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fx77c"] Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.686306 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" event={"ID":"dacef5b4-a838-423f-8ba9-c7d56df3bdbe","Type":"ContainerStarted","Data":"772389bc94a0117118c7ab08ef9f20380e490163c8ad1ebde7ede0e1cdcc10a4"} Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.687917 4735 generic.go:334] "Generic (PLEG): container finished" podID="db0a104c-3dab-4e49-b854-3aea57352423" containerID="21cc0ac2bc68b13798fc3f6ca713b1a7b1a409eb82d53c526de71b0aac5f828b" exitCode=0 Jan 28 10:00:31 crc kubenswrapper[4735]: I0128 10:00:31.687965 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-82wv7" event={"ID":"db0a104c-3dab-4e49-b854-3aea57352423","Type":"ContainerDied","Data":"21cc0ac2bc68b13798fc3f6ca713b1a7b1a409eb82d53c526de71b0aac5f828b"} Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.060547 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-82wv7" Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.171055 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnd6l\" (UniqueName: \"kubernetes.io/projected/db0a104c-3dab-4e49-b854-3aea57352423-kube-api-access-gnd6l\") pod \"db0a104c-3dab-4e49-b854-3aea57352423\" (UID: \"db0a104c-3dab-4e49-b854-3aea57352423\") " Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.171189 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db0a104c-3dab-4e49-b854-3aea57352423-operator-scripts\") pod \"db0a104c-3dab-4e49-b854-3aea57352423\" (UID: \"db0a104c-3dab-4e49-b854-3aea57352423\") " Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.171908 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db0a104c-3dab-4e49-b854-3aea57352423-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db0a104c-3dab-4e49-b854-3aea57352423" (UID: "db0a104c-3dab-4e49-b854-3aea57352423"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.180332 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db0a104c-3dab-4e49-b854-3aea57352423-kube-api-access-gnd6l" (OuterVolumeSpecName: "kube-api-access-gnd6l") pod "db0a104c-3dab-4e49-b854-3aea57352423" (UID: "db0a104c-3dab-4e49-b854-3aea57352423"). InnerVolumeSpecName "kube-api-access-gnd6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.272858 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnd6l\" (UniqueName: \"kubernetes.io/projected/db0a104c-3dab-4e49-b854-3aea57352423-kube-api-access-gnd6l\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.272896 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db0a104c-3dab-4e49-b854-3aea57352423-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.698619 4735 generic.go:334] "Generic (PLEG): container finished" podID="dacef5b4-a838-423f-8ba9-c7d56df3bdbe" containerID="ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553" exitCode=0 Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.698721 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" event={"ID":"dacef5b4-a838-423f-8ba9-c7d56df3bdbe","Type":"ContainerDied","Data":"ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553"} Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.700903 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82wv7" event={"ID":"db0a104c-3dab-4e49-b854-3aea57352423","Type":"ContainerDied","Data":"7795e606018f36238cd0915a52d353023a27dde617e3078017393ec4e39137b6"} Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.700932 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7795e606018f36238cd0915a52d353023a27dde617e3078017393ec4e39137b6" Jan 28 10:00:32 crc kubenswrapper[4735]: I0128 10:00:32.700954 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-82wv7" Jan 28 10:00:33 crc kubenswrapper[4735]: I0128 10:00:33.712292 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" event={"ID":"dacef5b4-a838-423f-8ba9-c7d56df3bdbe","Type":"ContainerStarted","Data":"67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b"} Jan 28 10:00:33 crc kubenswrapper[4735]: I0128 10:00:33.712675 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:33 crc kubenswrapper[4735]: I0128 10:00:33.737537 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" podStartSLOduration=2.73751862 podStartE2EDuration="2.73751862s" podCreationTimestamp="2026-01-28 10:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:33.730662938 +0000 UTC m=+1086.880327321" watchObservedRunningTime="2026-01-28 10:00:33.73751862 +0000 UTC m=+1086.887182983" Jan 28 10:00:36 crc kubenswrapper[4735]: I0128 10:00:36.739775 4735 generic.go:334] "Generic (PLEG): container finished" podID="5fcc5331-6d94-4b66-9e5b-2bb8bced49dd" containerID="61e47de233e0f4cd2f29c46baeac05d93a4e2c926bb585a8fa7e15b3427378b1" exitCode=0 Jan 28 10:00:36 crc kubenswrapper[4735]: I0128 10:00:36.739849 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dfks4" event={"ID":"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd","Type":"ContainerDied","Data":"61e47de233e0f4cd2f29c46baeac05d93a4e2c926bb585a8fa7e15b3427378b1"} Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.293364 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dfks4" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.368395 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-config-data\") pod \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.368454 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdx2z\" (UniqueName: \"kubernetes.io/projected/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-kube-api-access-xdx2z\") pod \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.368567 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-combined-ca-bundle\") pod \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.368584 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-db-sync-config-data\") pod \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\" (UID: \"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd\") " Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.374617 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5fcc5331-6d94-4b66-9e5b-2bb8bced49dd" (UID: "5fcc5331-6d94-4b66-9e5b-2bb8bced49dd"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.375139 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-kube-api-access-xdx2z" (OuterVolumeSpecName: "kube-api-access-xdx2z") pod "5fcc5331-6d94-4b66-9e5b-2bb8bced49dd" (UID: "5fcc5331-6d94-4b66-9e5b-2bb8bced49dd"). InnerVolumeSpecName "kube-api-access-xdx2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.395941 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fcc5331-6d94-4b66-9e5b-2bb8bced49dd" (UID: "5fcc5331-6d94-4b66-9e5b-2bb8bced49dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.412912 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-config-data" (OuterVolumeSpecName: "config-data") pod "5fcc5331-6d94-4b66-9e5b-2bb8bced49dd" (UID: "5fcc5331-6d94-4b66-9e5b-2bb8bced49dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.469902 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.469941 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdx2z\" (UniqueName: \"kubernetes.io/projected/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-kube-api-access-xdx2z\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.469952 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.469963 4735 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.758531 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dfks4" event={"ID":"5fcc5331-6d94-4b66-9e5b-2bb8bced49dd","Type":"ContainerDied","Data":"cc67eb09ccce81d27c3458ad888af970d24c422356694ce4b788e54564ce66a6"} Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.758585 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc67eb09ccce81d27c3458ad888af970d24c422356694ce4b788e54564ce66a6" Jan 28 10:00:38 crc kubenswrapper[4735]: I0128 10:00:38.758890 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dfks4" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.172664 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fx77c"] Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.173550 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" podUID="dacef5b4-a838-423f-8ba9-c7d56df3bdbe" containerName="dnsmasq-dns" containerID="cri-o://67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b" gracePeriod=10 Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.176713 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.208671 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-w2t8m"] Jan 28 10:00:39 crc kubenswrapper[4735]: E0128 10:00:39.209100 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0a104c-3dab-4e49-b854-3aea57352423" containerName="mariadb-account-create-update" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.209419 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0a104c-3dab-4e49-b854-3aea57352423" containerName="mariadb-account-create-update" Jan 28 10:00:39 crc kubenswrapper[4735]: E0128 10:00:39.209452 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcc5331-6d94-4b66-9e5b-2bb8bced49dd" containerName="glance-db-sync" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.209461 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcc5331-6d94-4b66-9e5b-2bb8bced49dd" containerName="glance-db-sync" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.209647 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="db0a104c-3dab-4e49-b854-3aea57352423" containerName="mariadb-account-create-update" Jan 28 10:00:39 crc 
kubenswrapper[4735]: I0128 10:00:39.209669 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcc5331-6d94-4b66-9e5b-2bb8bced49dd" containerName="glance-db-sync" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.210594 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.238727 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-w2t8m"] Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.284386 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.284460 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.284498 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-config\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.284525 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-nb\") 
pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.284575 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7drc\" (UniqueName: \"kubernetes.io/projected/b8117919-6bc1-449e-b3b5-668a44463190-kube-api-access-p7drc\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.284608 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.385913 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.385994 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.386036 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-config\") pod 
\"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.386065 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.386113 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7drc\" (UniqueName: \"kubernetes.io/projected/b8117919-6bc1-449e-b3b5-668a44463190-kube-api-access-p7drc\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.386148 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.386813 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.387033 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-config\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: 
\"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.387228 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.387341 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.387612 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.408359 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7drc\" (UniqueName: \"kubernetes.io/projected/b8117919-6bc1-449e-b3b5-668a44463190-kube-api-access-p7drc\") pod \"dnsmasq-dns-7ff5475cc9-w2t8m\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.543560 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.638453 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.693510 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-sb\") pod \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.693577 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-swift-storage-0\") pod \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.693679 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-config\") pod \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.693768 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6s77\" (UniqueName: \"kubernetes.io/projected/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-kube-api-access-w6s77\") pod \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.693794 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-svc\") pod \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.693814 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-nb\") pod \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\" (UID: \"dacef5b4-a838-423f-8ba9-c7d56df3bdbe\") " Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.699910 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-kube-api-access-w6s77" (OuterVolumeSpecName: "kube-api-access-w6s77") pod "dacef5b4-a838-423f-8ba9-c7d56df3bdbe" (UID: "dacef5b4-a838-423f-8ba9-c7d56df3bdbe"). InnerVolumeSpecName "kube-api-access-w6s77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.740433 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dacef5b4-a838-423f-8ba9-c7d56df3bdbe" (UID: "dacef5b4-a838-423f-8ba9-c7d56df3bdbe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.748784 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dacef5b4-a838-423f-8ba9-c7d56df3bdbe" (UID: "dacef5b4-a838-423f-8ba9-c7d56df3bdbe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.753279 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-config" (OuterVolumeSpecName: "config") pod "dacef5b4-a838-423f-8ba9-c7d56df3bdbe" (UID: "dacef5b4-a838-423f-8ba9-c7d56df3bdbe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.756014 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dacef5b4-a838-423f-8ba9-c7d56df3bdbe" (UID: "dacef5b4-a838-423f-8ba9-c7d56df3bdbe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.768549 4735 generic.go:334] "Generic (PLEG): container finished" podID="dacef5b4-a838-423f-8ba9-c7d56df3bdbe" containerID="67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b" exitCode=0 Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.768590 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" event={"ID":"dacef5b4-a838-423f-8ba9-c7d56df3bdbe","Type":"ContainerDied","Data":"67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b"} Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.768623 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.768642 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fx77c" event={"ID":"dacef5b4-a838-423f-8ba9-c7d56df3bdbe","Type":"ContainerDied","Data":"772389bc94a0117118c7ab08ef9f20380e490163c8ad1ebde7ede0e1cdcc10a4"} Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.768663 4735 scope.go:117] "RemoveContainer" containerID="67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.770399 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dacef5b4-a838-423f-8ba9-c7d56df3bdbe" (UID: "dacef5b4-a838-423f-8ba9-c7d56df3bdbe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.795721 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.795780 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.795793 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.795805 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6s77\" (UniqueName: 
\"kubernetes.io/projected/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-kube-api-access-w6s77\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.795817 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.795826 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dacef5b4-a838-423f-8ba9-c7d56df3bdbe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.799923 4735 scope.go:117] "RemoveContainer" containerID="ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.816079 4735 scope.go:117] "RemoveContainer" containerID="67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b" Jan 28 10:00:39 crc kubenswrapper[4735]: E0128 10:00:39.816474 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b\": container with ID starting with 67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b not found: ID does not exist" containerID="67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.816535 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b"} err="failed to get container status \"67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b\": rpc error: code = NotFound desc = could not find container \"67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b\": container with ID starting with 
67e99522ca8fec0e27c50c3e49e62fd70dd14367035b462a005cd43103fcb13b not found: ID does not exist" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.816572 4735 scope.go:117] "RemoveContainer" containerID="ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553" Jan 28 10:00:39 crc kubenswrapper[4735]: E0128 10:00:39.816949 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553\": container with ID starting with ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553 not found: ID does not exist" containerID="ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.816980 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553"} err="failed to get container status \"ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553\": rpc error: code = NotFound desc = could not find container \"ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553\": container with ID starting with ecbe6fc5f59098a7348b7a1747cd57fb22b2d2f3342b006dd1f8678b7d15d553 not found: ID does not exist" Jan 28 10:00:39 crc kubenswrapper[4735]: I0128 10:00:39.980465 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-w2t8m"] Jan 28 10:00:39 crc kubenswrapper[4735]: W0128 10:00:39.984730 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8117919_6bc1_449e_b3b5_668a44463190.slice/crio-b1964c0473d3118073b5013b648fdf4566d7cd494d4214a4ee0013f42ee1c87c WatchSource:0}: Error finding container b1964c0473d3118073b5013b648fdf4566d7cd494d4214a4ee0013f42ee1c87c: Status 404 returned error can't find the container with id 
b1964c0473d3118073b5013b648fdf4566d7cd494d4214a4ee0013f42ee1c87c Jan 28 10:00:40 crc kubenswrapper[4735]: I0128 10:00:40.106024 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fx77c"] Jan 28 10:00:40 crc kubenswrapper[4735]: I0128 10:00:40.112807 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fx77c"] Jan 28 10:00:40 crc kubenswrapper[4735]: I0128 10:00:40.717125 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 10:00:40 crc kubenswrapper[4735]: I0128 10:00:40.785422 4735 generic.go:334] "Generic (PLEG): container finished" podID="b8117919-6bc1-449e-b3b5-668a44463190" containerID="15241c7ef80de965929e4f426b789dda4f040afa4ca756b90fd1d4e5b934b205" exitCode=0 Jan 28 10:00:40 crc kubenswrapper[4735]: I0128 10:00:40.785761 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" event={"ID":"b8117919-6bc1-449e-b3b5-668a44463190","Type":"ContainerDied","Data":"15241c7ef80de965929e4f426b789dda4f040afa4ca756b90fd1d4e5b934b205"} Jan 28 10:00:40 crc kubenswrapper[4735]: I0128 10:00:40.786891 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" event={"ID":"b8117919-6bc1-449e-b3b5-668a44463190","Type":"ContainerStarted","Data":"b1964c0473d3118073b5013b648fdf4566d7cd494d4214a4ee0013f42ee1c87c"} Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.038257 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-l9h9d"] Jan 28 10:00:41 crc kubenswrapper[4735]: E0128 10:00:41.038629 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dacef5b4-a838-423f-8ba9-c7d56df3bdbe" containerName="dnsmasq-dns" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.038645 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="dacef5b4-a838-423f-8ba9-c7d56df3bdbe" containerName="dnsmasq-dns" Jan 28 
10:00:41 crc kubenswrapper[4735]: E0128 10:00:41.038661 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dacef5b4-a838-423f-8ba9-c7d56df3bdbe" containerName="init" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.038667 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="dacef5b4-a838-423f-8ba9-c7d56df3bdbe" containerName="init" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.038876 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="dacef5b4-a838-423f-8ba9-c7d56df3bdbe" containerName="dnsmasq-dns" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.039382 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.048932 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-l9h9d"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.127628 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb2j8\" (UniqueName: \"kubernetes.io/projected/2b3ba296-5623-443d-9349-61c27620d525-kube-api-access-tb2j8\") pod \"cinder-db-create-l9h9d\" (UID: \"2b3ba296-5623-443d-9349-61c27620d525\") " pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.127690 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3ba296-5623-443d-9349-61c27620d525-operator-scripts\") pod \"cinder-db-create-l9h9d\" (UID: \"2b3ba296-5623-443d-9349-61c27620d525\") " pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.129938 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bpqdm"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.131134 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-create-bpqdm"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.131216 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.148670 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-41da-account-create-update-6w5pq"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.149609 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.153015 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.160955 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-41da-account-create-update-6w5pq"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.229545 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph7tf\" (UniqueName: \"kubernetes.io/projected/3e71de2a-5efc-430d-bdff-7480ec0f7465-kube-api-access-ph7tf\") pod \"barbican-db-create-bpqdm\" (UID: \"3e71de2a-5efc-430d-bdff-7480ec0f7465\") " pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.229618 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb2j8\" (UniqueName: \"kubernetes.io/projected/2b3ba296-5623-443d-9349-61c27620d525-kube-api-access-tb2j8\") pod \"cinder-db-create-l9h9d\" (UID: \"2b3ba296-5623-443d-9349-61c27620d525\") " pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.229732 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3ba296-5623-443d-9349-61c27620d525-operator-scripts\") pod 
\"cinder-db-create-l9h9d\" (UID: \"2b3ba296-5623-443d-9349-61c27620d525\") " pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.229905 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e71de2a-5efc-430d-bdff-7480ec0f7465-operator-scripts\") pod \"barbican-db-create-bpqdm\" (UID: \"3e71de2a-5efc-430d-bdff-7480ec0f7465\") " pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.230124 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fb5k\" (UniqueName: \"kubernetes.io/projected/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-kube-api-access-2fb5k\") pod \"cinder-41da-account-create-update-6w5pq\" (UID: \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\") " pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.230177 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-operator-scripts\") pod \"cinder-41da-account-create-update-6w5pq\" (UID: \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\") " pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.230523 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3ba296-5623-443d-9349-61c27620d525-operator-scripts\") pod \"cinder-db-create-l9h9d\" (UID: \"2b3ba296-5623-443d-9349-61c27620d525\") " pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.240433 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2d62-account-create-update-kh76t"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 
10:00:41.247105 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.249376 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.254198 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2d62-account-create-update-kh76t"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.261591 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb2j8\" (UniqueName: \"kubernetes.io/projected/2b3ba296-5623-443d-9349-61c27620d525-kube-api-access-tb2j8\") pod \"cinder-db-create-l9h9d\" (UID: \"2b3ba296-5623-443d-9349-61c27620d525\") " pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.331456 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2whdf\" (UniqueName: \"kubernetes.io/projected/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-kube-api-access-2whdf\") pod \"barbican-2d62-account-create-update-kh76t\" (UID: \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\") " pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.331796 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-operator-scripts\") pod \"barbican-2d62-account-create-update-kh76t\" (UID: \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\") " pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.331912 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3e71de2a-5efc-430d-bdff-7480ec0f7465-operator-scripts\") pod \"barbican-db-create-bpqdm\" (UID: \"3e71de2a-5efc-430d-bdff-7480ec0f7465\") " pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.332113 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fb5k\" (UniqueName: \"kubernetes.io/projected/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-kube-api-access-2fb5k\") pod \"cinder-41da-account-create-update-6w5pq\" (UID: \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\") " pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.332172 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-operator-scripts\") pod \"cinder-41da-account-create-update-6w5pq\" (UID: \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\") " pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.332229 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph7tf\" (UniqueName: \"kubernetes.io/projected/3e71de2a-5efc-430d-bdff-7480ec0f7465-kube-api-access-ph7tf\") pod \"barbican-db-create-bpqdm\" (UID: \"3e71de2a-5efc-430d-bdff-7480ec0f7465\") " pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.332640 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e71de2a-5efc-430d-bdff-7480ec0f7465-operator-scripts\") pod \"barbican-db-create-bpqdm\" (UID: \"3e71de2a-5efc-430d-bdff-7480ec0f7465\") " pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.333103 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-operator-scripts\") pod \"cinder-41da-account-create-update-6w5pq\" (UID: \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\") " pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.350891 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fb5k\" (UniqueName: \"kubernetes.io/projected/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-kube-api-access-2fb5k\") pod \"cinder-41da-account-create-update-6w5pq\" (UID: \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\") " pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.351724 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph7tf\" (UniqueName: \"kubernetes.io/projected/3e71de2a-5efc-430d-bdff-7480ec0f7465-kube-api-access-ph7tf\") pod \"barbican-db-create-bpqdm\" (UID: \"3e71de2a-5efc-430d-bdff-7480ec0f7465\") " pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.367970 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.411164 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-c2m26"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.412380 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.415217 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.415513 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pnjpl" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.415922 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.416692 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.424807 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-c2m26"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.433632 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2whdf\" (UniqueName: \"kubernetes.io/projected/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-kube-api-access-2whdf\") pod \"barbican-2d62-account-create-update-kh76t\" (UID: \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\") " pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.433690 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-operator-scripts\") pod \"barbican-2d62-account-create-update-kh76t\" (UID: \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\") " pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.434704 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-operator-scripts\") pod 
\"barbican-2d62-account-create-update-kh76t\" (UID: \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\") " pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.448841 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-djphf"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.450130 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-djphf" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.457286 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.460732 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2whdf\" (UniqueName: \"kubernetes.io/projected/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-kube-api-access-2whdf\") pod \"barbican-2d62-account-create-update-kh76t\" (UID: \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\") " pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.462006 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-djphf"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.470244 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.515634 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dacef5b4-a838-423f-8ba9-c7d56df3bdbe" path="/var/lib/kubelet/pods/dacef5b4-a838-423f-8ba9-c7d56df3bdbe/volumes" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.535534 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-operator-scripts\") pod \"neutron-db-create-djphf\" (UID: \"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\") " pod="openstack/neutron-db-create-djphf" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.535577 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqfpm\" (UniqueName: \"kubernetes.io/projected/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-kube-api-access-nqfpm\") pod \"neutron-db-create-djphf\" (UID: \"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\") " pod="openstack/neutron-db-create-djphf" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.535621 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-combined-ca-bundle\") pod \"keystone-db-sync-c2m26\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.535649 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-config-data\") pod \"keystone-db-sync-c2m26\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.535678 
4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxr7\" (UniqueName: \"kubernetes.io/projected/da69c572-110e-4caf-aa51-a86700885cb5-kube-api-access-gjxr7\") pod \"keystone-db-sync-c2m26\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.539152 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a22f-account-create-update-5jl8s"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.540365 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.543217 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.566304 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.572087 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a22f-account-create-update-5jl8s"] Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.638189 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqmk\" (UniqueName: \"kubernetes.io/projected/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-kube-api-access-rmqmk\") pod \"neutron-a22f-account-create-update-5jl8s\" (UID: \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\") " pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.638517 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-operator-scripts\") pod \"neutron-db-create-djphf\" (UID: 
\"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\") " pod="openstack/neutron-db-create-djphf" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.638545 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfpm\" (UniqueName: \"kubernetes.io/projected/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-kube-api-access-nqfpm\") pod \"neutron-db-create-djphf\" (UID: \"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\") " pod="openstack/neutron-db-create-djphf" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.638586 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-operator-scripts\") pod \"neutron-a22f-account-create-update-5jl8s\" (UID: \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\") " pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.638610 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-combined-ca-bundle\") pod \"keystone-db-sync-c2m26\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.638630 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-config-data\") pod \"keystone-db-sync-c2m26\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.638664 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjxr7\" (UniqueName: \"kubernetes.io/projected/da69c572-110e-4caf-aa51-a86700885cb5-kube-api-access-gjxr7\") pod \"keystone-db-sync-c2m26\" (UID: 
\"da69c572-110e-4caf-aa51-a86700885cb5\") " pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.639624 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-operator-scripts\") pod \"neutron-db-create-djphf\" (UID: \"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\") " pod="openstack/neutron-db-create-djphf" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.647204 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-config-data\") pod \"keystone-db-sync-c2m26\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.651156 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-combined-ca-bundle\") pod \"keystone-db-sync-c2m26\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.662129 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxr7\" (UniqueName: \"kubernetes.io/projected/da69c572-110e-4caf-aa51-a86700885cb5-kube-api-access-gjxr7\") pod \"keystone-db-sync-c2m26\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.662309 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfpm\" (UniqueName: \"kubernetes.io/projected/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-kube-api-access-nqfpm\") pod \"neutron-db-create-djphf\" (UID: \"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\") " pod="openstack/neutron-db-create-djphf" Jan 28 10:00:41 crc 
kubenswrapper[4735]: I0128 10:00:41.740893 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmqmk\" (UniqueName: \"kubernetes.io/projected/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-kube-api-access-rmqmk\") pod \"neutron-a22f-account-create-update-5jl8s\" (UID: \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\") " pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.740997 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-operator-scripts\") pod \"neutron-a22f-account-create-update-5jl8s\" (UID: \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\") " pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.741880 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-operator-scripts\") pod \"neutron-a22f-account-create-update-5jl8s\" (UID: \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\") " pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.760948 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmqmk\" (UniqueName: \"kubernetes.io/projected/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-kube-api-access-rmqmk\") pod \"neutron-a22f-account-create-update-5jl8s\" (UID: \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\") " pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.800673 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" event={"ID":"b8117919-6bc1-449e-b3b5-668a44463190","Type":"ContainerStarted","Data":"e125a8a7b38b5cee9070bb72b8256531f85f9205068731bcb8473f3617d1b708"} Jan 28 10:00:41 crc 
kubenswrapper[4735]: I0128 10:00:41.800967 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.817211 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.822135 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" podStartSLOduration=2.822113497 podStartE2EDuration="2.822113497s" podCreationTimestamp="2026-01-28 10:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:41.820120867 +0000 UTC m=+1094.969785240" watchObservedRunningTime="2026-01-28 10:00:41.822113497 +0000 UTC m=+1094.971777870" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.832245 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-djphf" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.874930 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:41 crc kubenswrapper[4735]: I0128 10:00:41.898053 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-l9h9d"] Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.028944 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-41da-account-create-update-6w5pq"] Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.071399 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bpqdm"] Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.137501 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2d62-account-create-update-kh76t"] Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.325913 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-djphf"] Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.340377 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-c2m26"] Jan 28 10:00:42 crc kubenswrapper[4735]: W0128 10:00:42.346072 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda69c572_110e_4caf_aa51_a86700885cb5.slice/crio-67fe3b56ebd1941aa9f822f5c0962f56478e42ceda1a0fc7c49871ce5f5026bf WatchSource:0}: Error finding container 67fe3b56ebd1941aa9f822f5c0962f56478e42ceda1a0fc7c49871ce5f5026bf: Status 404 returned error can't find the container with id 67fe3b56ebd1941aa9f822f5c0962f56478e42ceda1a0fc7c49871ce5f5026bf Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.450580 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a22f-account-create-update-5jl8s"] Jan 28 10:00:42 crc kubenswrapper[4735]: W0128 10:00:42.489424 4735 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eeb0ff9_4319_4e0b_82d8_07d523060b9f.slice/crio-19ee0690030b341a6d80ffc7ff237cc173b3312c8ed83907c2c114dea081895b WatchSource:0}: Error finding container 19ee0690030b341a6d80ffc7ff237cc173b3312c8ed83907c2c114dea081895b: Status 404 returned error can't find the container with id 19ee0690030b341a6d80ffc7ff237cc173b3312c8ed83907c2c114dea081895b Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.816709 4735 generic.go:334] "Generic (PLEG): container finished" podID="3e71de2a-5efc-430d-bdff-7480ec0f7465" containerID="a8dac7c19ff73dced448db8cf9a81e51cea1bafd2bff8c7dbb8287e3a1f5b880" exitCode=0 Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.816767 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bpqdm" event={"ID":"3e71de2a-5efc-430d-bdff-7480ec0f7465","Type":"ContainerDied","Data":"a8dac7c19ff73dced448db8cf9a81e51cea1bafd2bff8c7dbb8287e3a1f5b880"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.816823 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bpqdm" event={"ID":"3e71de2a-5efc-430d-bdff-7480ec0f7465","Type":"ContainerStarted","Data":"59813ff67a83f9d5166477ac652a72292d8c6657d41a4ae904e719327ca667a3"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.821854 4735 generic.go:334] "Generic (PLEG): container finished" podID="2b3ba296-5623-443d-9349-61c27620d525" containerID="3e1de117adec77fcdc3e6f93df39d57d236fa661ddb74f58855da429ef323836" exitCode=0 Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.821934 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-l9h9d" event={"ID":"2b3ba296-5623-443d-9349-61c27620d525","Type":"ContainerDied","Data":"3e1de117adec77fcdc3e6f93df39d57d236fa661ddb74f58855da429ef323836"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.821967 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-l9h9d" event={"ID":"2b3ba296-5623-443d-9349-61c27620d525","Type":"ContainerStarted","Data":"db9ea8befc838b8ad3d55fac7a45de25d9d324df243f3fa0f8701733695f3b13"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.826815 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-djphf" event={"ID":"a278d191-6be1-4cc8-98a1-c78ecf19c1b8","Type":"ContainerStarted","Data":"e0fe4545364e8c14510c4a06bd6dc6181f116bec80d6a79a9b06d0fbdab77f8f"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.826845 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-djphf" event={"ID":"a278d191-6be1-4cc8-98a1-c78ecf19c1b8","Type":"ContainerStarted","Data":"d0db1acc961b7c24ff710b698a5b9d97210d9a92a4e157d7105484ac0bb8c7ed"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.828495 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a22f-account-create-update-5jl8s" event={"ID":"9eeb0ff9-4319-4e0b-82d8-07d523060b9f","Type":"ContainerStarted","Data":"19ee0690030b341a6d80ffc7ff237cc173b3312c8ed83907c2c114dea081895b"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.833804 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c2m26" event={"ID":"da69c572-110e-4caf-aa51-a86700885cb5","Type":"ContainerStarted","Data":"67fe3b56ebd1941aa9f822f5c0962f56478e42ceda1a0fc7c49871ce5f5026bf"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.837161 4735 generic.go:334] "Generic (PLEG): container finished" podID="9cb508b8-a325-46e1-9ddb-1ce941cd8d48" containerID="497ab24f85b5899a2648bbde5d46258506f7c920dd34e35cee162159bab75718" exitCode=0 Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.837251 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-41da-account-create-update-6w5pq" 
event={"ID":"9cb508b8-a325-46e1-9ddb-1ce941cd8d48","Type":"ContainerDied","Data":"497ab24f85b5899a2648bbde5d46258506f7c920dd34e35cee162159bab75718"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.837279 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-41da-account-create-update-6w5pq" event={"ID":"9cb508b8-a325-46e1-9ddb-1ce941cd8d48","Type":"ContainerStarted","Data":"ca2c49bb002fbe19e8ff802b826648ee732a5a14c36667c412b6212abec72cd5"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.839065 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2d62-account-create-update-kh76t" event={"ID":"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe","Type":"ContainerStarted","Data":"59818cedee9a7d9e36eb957d4d37c122583db2a0ea724fe50d39cfc2ed0850fe"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.839108 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2d62-account-create-update-kh76t" event={"ID":"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe","Type":"ContainerStarted","Data":"8584d3229796c80c13bab93ac14a89ab22ee6afeddb98d2a7da44d2cfa9c5fdf"} Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.851704 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-djphf" podStartSLOduration=1.851684917 podStartE2EDuration="1.851684917s" podCreationTimestamp="2026-01-28 10:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:42.848807645 +0000 UTC m=+1095.998472018" watchObservedRunningTime="2026-01-28 10:00:42.851684917 +0000 UTC m=+1096.001349290" Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.881732 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-a22f-account-create-update-5jl8s" podStartSLOduration=1.881715241 podStartE2EDuration="1.881715241s" podCreationTimestamp="2026-01-28 10:00:41 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:42.875082474 +0000 UTC m=+1096.024746837" watchObservedRunningTime="2026-01-28 10:00:42.881715241 +0000 UTC m=+1096.031379614" Jan 28 10:00:42 crc kubenswrapper[4735]: I0128 10:00:42.902023 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-2d62-account-create-update-kh76t" podStartSLOduration=1.9020064620000001 podStartE2EDuration="1.902006462s" podCreationTimestamp="2026-01-28 10:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:42.901382496 +0000 UTC m=+1096.051046859" watchObservedRunningTime="2026-01-28 10:00:42.902006462 +0000 UTC m=+1096.051670835" Jan 28 10:00:43 crc kubenswrapper[4735]: I0128 10:00:43.852031 4735 generic.go:334] "Generic (PLEG): container finished" podID="9eeb0ff9-4319-4e0b-82d8-07d523060b9f" containerID="8ea9d4e3ac4ed6ad2be647d9884e043a1f86a34d98621f183546e60556791092" exitCode=0 Jan 28 10:00:43 crc kubenswrapper[4735]: I0128 10:00:43.852145 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a22f-account-create-update-5jl8s" event={"ID":"9eeb0ff9-4319-4e0b-82d8-07d523060b9f","Type":"ContainerDied","Data":"8ea9d4e3ac4ed6ad2be647d9884e043a1f86a34d98621f183546e60556791092"} Jan 28 10:00:43 crc kubenswrapper[4735]: I0128 10:00:43.855806 4735 generic.go:334] "Generic (PLEG): container finished" podID="d65a3154-7da4-4c99-b8d1-e3015e6fd1fe" containerID="59818cedee9a7d9e36eb957d4d37c122583db2a0ea724fe50d39cfc2ed0850fe" exitCode=0 Jan 28 10:00:43 crc kubenswrapper[4735]: I0128 10:00:43.855886 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2d62-account-create-update-kh76t" 
event={"ID":"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe","Type":"ContainerDied","Data":"59818cedee9a7d9e36eb957d4d37c122583db2a0ea724fe50d39cfc2ed0850fe"} Jan 28 10:00:43 crc kubenswrapper[4735]: I0128 10:00:43.858111 4735 generic.go:334] "Generic (PLEG): container finished" podID="a278d191-6be1-4cc8-98a1-c78ecf19c1b8" containerID="e0fe4545364e8c14510c4a06bd6dc6181f116bec80d6a79a9b06d0fbdab77f8f" exitCode=0 Jan 28 10:00:43 crc kubenswrapper[4735]: I0128 10:00:43.858185 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-djphf" event={"ID":"a278d191-6be1-4cc8-98a1-c78ecf19c1b8","Type":"ContainerDied","Data":"e0fe4545364e8c14510c4a06bd6dc6181f116bec80d6a79a9b06d0fbdab77f8f"} Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.292964 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.301956 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.379331 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.396599 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph7tf\" (UniqueName: \"kubernetes.io/projected/3e71de2a-5efc-430d-bdff-7480ec0f7465-kube-api-access-ph7tf\") pod \"3e71de2a-5efc-430d-bdff-7480ec0f7465\" (UID: \"3e71de2a-5efc-430d-bdff-7480ec0f7465\") " Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.396762 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fb5k\" (UniqueName: \"kubernetes.io/projected/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-kube-api-access-2fb5k\") pod \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\" (UID: \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\") " Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.396885 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e71de2a-5efc-430d-bdff-7480ec0f7465-operator-scripts\") pod \"3e71de2a-5efc-430d-bdff-7480ec0f7465\" (UID: \"3e71de2a-5efc-430d-bdff-7480ec0f7465\") " Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.396908 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-operator-scripts\") pod \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\" (UID: \"9cb508b8-a325-46e1-9ddb-1ce941cd8d48\") " Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.397975 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9cb508b8-a325-46e1-9ddb-1ce941cd8d48" (UID: "9cb508b8-a325-46e1-9ddb-1ce941cd8d48"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.398322 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e71de2a-5efc-430d-bdff-7480ec0f7465-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e71de2a-5efc-430d-bdff-7480ec0f7465" (UID: "3e71de2a-5efc-430d-bdff-7480ec0f7465"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.434225 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e71de2a-5efc-430d-bdff-7480ec0f7465-kube-api-access-ph7tf" (OuterVolumeSpecName: "kube-api-access-ph7tf") pod "3e71de2a-5efc-430d-bdff-7480ec0f7465" (UID: "3e71de2a-5efc-430d-bdff-7480ec0f7465"). InnerVolumeSpecName "kube-api-access-ph7tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.436023 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-kube-api-access-2fb5k" (OuterVolumeSpecName: "kube-api-access-2fb5k") pod "9cb508b8-a325-46e1-9ddb-1ce941cd8d48" (UID: "9cb508b8-a325-46e1-9ddb-1ce941cd8d48"). InnerVolumeSpecName "kube-api-access-2fb5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.498047 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3ba296-5623-443d-9349-61c27620d525-operator-scripts\") pod \"2b3ba296-5623-443d-9349-61c27620d525\" (UID: \"2b3ba296-5623-443d-9349-61c27620d525\") " Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.498238 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb2j8\" (UniqueName: \"kubernetes.io/projected/2b3ba296-5623-443d-9349-61c27620d525-kube-api-access-tb2j8\") pod \"2b3ba296-5623-443d-9349-61c27620d525\" (UID: \"2b3ba296-5623-443d-9349-61c27620d525\") " Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.498583 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e71de2a-5efc-430d-bdff-7480ec0f7465-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.498602 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.498611 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph7tf\" (UniqueName: \"kubernetes.io/projected/3e71de2a-5efc-430d-bdff-7480ec0f7465-kube-api-access-ph7tf\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.498621 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fb5k\" (UniqueName: \"kubernetes.io/projected/9cb508b8-a325-46e1-9ddb-1ce941cd8d48-kube-api-access-2fb5k\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.499030 4735 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b3ba296-5623-443d-9349-61c27620d525-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b3ba296-5623-443d-9349-61c27620d525" (UID: "2b3ba296-5623-443d-9349-61c27620d525"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.501845 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3ba296-5623-443d-9349-61c27620d525-kube-api-access-tb2j8" (OuterVolumeSpecName: "kube-api-access-tb2j8") pod "2b3ba296-5623-443d-9349-61c27620d525" (UID: "2b3ba296-5623-443d-9349-61c27620d525"). InnerVolumeSpecName "kube-api-access-tb2j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.600062 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb2j8\" (UniqueName: \"kubernetes.io/projected/2b3ba296-5623-443d-9349-61c27620d525-kube-api-access-tb2j8\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.600098 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b3ba296-5623-443d-9349-61c27620d525-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.874202 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-41da-account-create-update-6w5pq" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.874302 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-41da-account-create-update-6w5pq" event={"ID":"9cb508b8-a325-46e1-9ddb-1ce941cd8d48","Type":"ContainerDied","Data":"ca2c49bb002fbe19e8ff802b826648ee732a5a14c36667c412b6212abec72cd5"} Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.874340 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca2c49bb002fbe19e8ff802b826648ee732a5a14c36667c412b6212abec72cd5" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.877399 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bpqdm" event={"ID":"3e71de2a-5efc-430d-bdff-7480ec0f7465","Type":"ContainerDied","Data":"59813ff67a83f9d5166477ac652a72292d8c6657d41a4ae904e719327ca667a3"} Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.877447 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59813ff67a83f9d5166477ac652a72292d8c6657d41a4ae904e719327ca667a3" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.877442 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bpqdm" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.881551 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-l9h9d" Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.882649 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-l9h9d" event={"ID":"2b3ba296-5623-443d-9349-61c27620d525","Type":"ContainerDied","Data":"db9ea8befc838b8ad3d55fac7a45de25d9d324df243f3fa0f8701733695f3b13"} Jan 28 10:00:44 crc kubenswrapper[4735]: I0128 10:00:44.882739 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db9ea8befc838b8ad3d55fac7a45de25d9d324df243f3fa0f8701733695f3b13" Jan 28 10:00:46 crc kubenswrapper[4735]: I0128 10:00:46.910193 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2d62-account-create-update-kh76t" event={"ID":"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe","Type":"ContainerDied","Data":"8584d3229796c80c13bab93ac14a89ab22ee6afeddb98d2a7da44d2cfa9c5fdf"} Jan 28 10:00:46 crc kubenswrapper[4735]: I0128 10:00:46.910603 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8584d3229796c80c13bab93ac14a89ab22ee6afeddb98d2a7da44d2cfa9c5fdf" Jan 28 10:00:46 crc kubenswrapper[4735]: I0128 10:00:46.912941 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-djphf" event={"ID":"a278d191-6be1-4cc8-98a1-c78ecf19c1b8","Type":"ContainerDied","Data":"d0db1acc961b7c24ff710b698a5b9d97210d9a92a4e157d7105484ac0bb8c7ed"} Jan 28 10:00:46 crc kubenswrapper[4735]: I0128 10:00:46.912964 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0db1acc961b7c24ff710b698a5b9d97210d9a92a4e157d7105484ac0bb8c7ed" Jan 28 10:00:46 crc kubenswrapper[4735]: I0128 10:00:46.914667 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a22f-account-create-update-5jl8s" event={"ID":"9eeb0ff9-4319-4e0b-82d8-07d523060b9f","Type":"ContainerDied","Data":"19ee0690030b341a6d80ffc7ff237cc173b3312c8ed83907c2c114dea081895b"} Jan 28 10:00:46 crc 
kubenswrapper[4735]: I0128 10:00:46.914687 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19ee0690030b341a6d80ffc7ff237cc173b3312c8ed83907c2c114dea081895b" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.121763 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.144442 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.154918 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-djphf" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.255265 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqfpm\" (UniqueName: \"kubernetes.io/projected/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-kube-api-access-nqfpm\") pod \"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\" (UID: \"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\") " Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.255404 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmqmk\" (UniqueName: \"kubernetes.io/projected/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-kube-api-access-rmqmk\") pod \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\" (UID: \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\") " Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.255451 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-operator-scripts\") pod \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\" (UID: \"9eeb0ff9-4319-4e0b-82d8-07d523060b9f\") " Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.255484 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-operator-scripts\") pod \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\" (UID: \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\") " Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.255603 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-operator-scripts\") pod \"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\" (UID: \"a278d191-6be1-4cc8-98a1-c78ecf19c1b8\") " Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.255653 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2whdf\" (UniqueName: \"kubernetes.io/projected/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-kube-api-access-2whdf\") pod \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\" (UID: \"d65a3154-7da4-4c99-b8d1-e3015e6fd1fe\") " Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.256796 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a278d191-6be1-4cc8-98a1-c78ecf19c1b8" (UID: "a278d191-6be1-4cc8-98a1-c78ecf19c1b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.256866 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d65a3154-7da4-4c99-b8d1-e3015e6fd1fe" (UID: "d65a3154-7da4-4c99-b8d1-e3015e6fd1fe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.257049 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9eeb0ff9-4319-4e0b-82d8-07d523060b9f" (UID: "9eeb0ff9-4319-4e0b-82d8-07d523060b9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.260319 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-kube-api-access-rmqmk" (OuterVolumeSpecName: "kube-api-access-rmqmk") pod "9eeb0ff9-4319-4e0b-82d8-07d523060b9f" (UID: "9eeb0ff9-4319-4e0b-82d8-07d523060b9f"). InnerVolumeSpecName "kube-api-access-rmqmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.263058 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-kube-api-access-2whdf" (OuterVolumeSpecName: "kube-api-access-2whdf") pod "d65a3154-7da4-4c99-b8d1-e3015e6fd1fe" (UID: "d65a3154-7da4-4c99-b8d1-e3015e6fd1fe"). InnerVolumeSpecName "kube-api-access-2whdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.265927 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-kube-api-access-nqfpm" (OuterVolumeSpecName: "kube-api-access-nqfpm") pod "a278d191-6be1-4cc8-98a1-c78ecf19c1b8" (UID: "a278d191-6be1-4cc8-98a1-c78ecf19c1b8"). InnerVolumeSpecName "kube-api-access-nqfpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.358160 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqfpm\" (UniqueName: \"kubernetes.io/projected/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-kube-api-access-nqfpm\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.358206 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmqmk\" (UniqueName: \"kubernetes.io/projected/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-kube-api-access-rmqmk\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.358219 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eeb0ff9-4319-4e0b-82d8-07d523060b9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.358232 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.358245 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a278d191-6be1-4cc8-98a1-c78ecf19c1b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.358256 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2whdf\" (UniqueName: \"kubernetes.io/projected/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe-kube-api-access-2whdf\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.930599 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c2m26" 
event={"ID":"da69c572-110e-4caf-aa51-a86700885cb5","Type":"ContainerStarted","Data":"125154641b1da1358a3b6cbd37cda051a4312479fa1f779cff3c89ea80066db8"} Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.930657 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2d62-account-create-update-kh76t" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.930686 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-djphf" Jan 28 10:00:47 crc kubenswrapper[4735]: I0128 10:00:47.930797 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a22f-account-create-update-5jl8s" Jan 28 10:00:48 crc kubenswrapper[4735]: I0128 10:00:48.162583 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-c2m26" podStartSLOduration=2.576934339 podStartE2EDuration="7.162556345s" podCreationTimestamp="2026-01-28 10:00:41 +0000 UTC" firstStartedPulling="2026-01-28 10:00:42.359029518 +0000 UTC m=+1095.508693891" lastFinishedPulling="2026-01-28 10:00:46.944651524 +0000 UTC m=+1100.094315897" observedRunningTime="2026-01-28 10:00:47.947531081 +0000 UTC m=+1101.097195454" watchObservedRunningTime="2026-01-28 10:00:48.162556345 +0000 UTC m=+1101.312220748" Jan 28 10:00:49 crc kubenswrapper[4735]: I0128 10:00:49.547055 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:00:49 crc kubenswrapper[4735]: I0128 10:00:49.629232 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hkwcb"] Jan 28 10:00:49 crc kubenswrapper[4735]: I0128 10:00:49.629667 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-hkwcb" podUID="902423c1-75ea-4c5a-87d3-d77dbdbc6d85" containerName="dnsmasq-dns" 
containerID="cri-o://9330000dbe84fdc05c70e8cf182bd95e084855522c5e98afe2982f7c1b6cfed5" gracePeriod=10 Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:49.975878 4735 generic.go:334] "Generic (PLEG): container finished" podID="902423c1-75ea-4c5a-87d3-d77dbdbc6d85" containerID="9330000dbe84fdc05c70e8cf182bd95e084855522c5e98afe2982f7c1b6cfed5" exitCode=0 Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:49.975927 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hkwcb" event={"ID":"902423c1-75ea-4c5a-87d3-d77dbdbc6d85","Type":"ContainerDied","Data":"9330000dbe84fdc05c70e8cf182bd95e084855522c5e98afe2982f7c1b6cfed5"} Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.107887 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.209709 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-sb\") pod \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.209810 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-dns-svc\") pod \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.209852 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-nb\") pod \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.209885 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rlnx\" (UniqueName: \"kubernetes.io/projected/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-kube-api-access-6rlnx\") pod \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.210012 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-config\") pod \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\" (UID: \"902423c1-75ea-4c5a-87d3-d77dbdbc6d85\") " Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.218428 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-kube-api-access-6rlnx" (OuterVolumeSpecName: "kube-api-access-6rlnx") pod "902423c1-75ea-4c5a-87d3-d77dbdbc6d85" (UID: "902423c1-75ea-4c5a-87d3-d77dbdbc6d85"). InnerVolumeSpecName "kube-api-access-6rlnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.253060 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "902423c1-75ea-4c5a-87d3-d77dbdbc6d85" (UID: "902423c1-75ea-4c5a-87d3-d77dbdbc6d85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.254113 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-config" (OuterVolumeSpecName: "config") pod "902423c1-75ea-4c5a-87d3-d77dbdbc6d85" (UID: "902423c1-75ea-4c5a-87d3-d77dbdbc6d85"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.264181 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "902423c1-75ea-4c5a-87d3-d77dbdbc6d85" (UID: "902423c1-75ea-4c5a-87d3-d77dbdbc6d85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.272652 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "902423c1-75ea-4c5a-87d3-d77dbdbc6d85" (UID: "902423c1-75ea-4c5a-87d3-d77dbdbc6d85"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.311617 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.311651 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.311671 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.311685 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rlnx\" (UniqueName: \"kubernetes.io/projected/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-kube-api-access-6rlnx\") on node \"crc\" 
DevicePath \"\"" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.311696 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902423c1-75ea-4c5a-87d3-d77dbdbc6d85-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.985352 4735 generic.go:334] "Generic (PLEG): container finished" podID="da69c572-110e-4caf-aa51-a86700885cb5" containerID="125154641b1da1358a3b6cbd37cda051a4312479fa1f779cff3c89ea80066db8" exitCode=0 Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.985413 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c2m26" event={"ID":"da69c572-110e-4caf-aa51-a86700885cb5","Type":"ContainerDied","Data":"125154641b1da1358a3b6cbd37cda051a4312479fa1f779cff3c89ea80066db8"} Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.988384 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hkwcb" event={"ID":"902423c1-75ea-4c5a-87d3-d77dbdbc6d85","Type":"ContainerDied","Data":"5f7d30ecb701214977fb7da4f0fe1e7f95a78ce62c558016c0e6e1be939956af"} Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.988426 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hkwcb" Jan 28 10:00:50 crc kubenswrapper[4735]: I0128 10:00:50.988444 4735 scope.go:117] "RemoveContainer" containerID="9330000dbe84fdc05c70e8cf182bd95e084855522c5e98afe2982f7c1b6cfed5" Jan 28 10:00:51 crc kubenswrapper[4735]: I0128 10:00:51.018845 4735 scope.go:117] "RemoveContainer" containerID="e74694f6382344cbe432bb1be2091d9cbd137f0183158a3439adafcf4945f272" Jan 28 10:00:51 crc kubenswrapper[4735]: I0128 10:00:51.041844 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hkwcb"] Jan 28 10:00:51 crc kubenswrapper[4735]: I0128 10:00:51.064199 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hkwcb"] Jan 28 10:00:51 crc kubenswrapper[4735]: I0128 10:00:51.520041 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902423c1-75ea-4c5a-87d3-d77dbdbc6d85" path="/var/lib/kubelet/pods/902423c1-75ea-4c5a-87d3-d77dbdbc6d85/volumes" Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.325411 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.453910 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-config-data\") pod \"da69c572-110e-4caf-aa51-a86700885cb5\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.454019 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-combined-ca-bundle\") pod \"da69c572-110e-4caf-aa51-a86700885cb5\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.454085 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjxr7\" (UniqueName: \"kubernetes.io/projected/da69c572-110e-4caf-aa51-a86700885cb5-kube-api-access-gjxr7\") pod \"da69c572-110e-4caf-aa51-a86700885cb5\" (UID: \"da69c572-110e-4caf-aa51-a86700885cb5\") " Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.459084 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da69c572-110e-4caf-aa51-a86700885cb5-kube-api-access-gjxr7" (OuterVolumeSpecName: "kube-api-access-gjxr7") pod "da69c572-110e-4caf-aa51-a86700885cb5" (UID: "da69c572-110e-4caf-aa51-a86700885cb5"). InnerVolumeSpecName "kube-api-access-gjxr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.475136 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da69c572-110e-4caf-aa51-a86700885cb5" (UID: "da69c572-110e-4caf-aa51-a86700885cb5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.514850 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-config-data" (OuterVolumeSpecName: "config-data") pod "da69c572-110e-4caf-aa51-a86700885cb5" (UID: "da69c572-110e-4caf-aa51-a86700885cb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.557330 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.557376 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c572-110e-4caf-aa51-a86700885cb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:52 crc kubenswrapper[4735]: I0128 10:00:52.557400 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjxr7\" (UniqueName: \"kubernetes.io/projected/da69c572-110e-4caf-aa51-a86700885cb5-kube-api-access-gjxr7\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.010536 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c2m26" event={"ID":"da69c572-110e-4caf-aa51-a86700885cb5","Type":"ContainerDied","Data":"67fe3b56ebd1941aa9f822f5c0962f56478e42ceda1a0fc7c49871ce5f5026bf"} Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.010892 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67fe3b56ebd1941aa9f822f5c0962f56478e42ceda1a0fc7c49871ce5f5026bf" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.010608 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-c2m26" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.298740 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pgzlg"] Jan 28 10:00:53 crc kubenswrapper[4735]: E0128 10:00:53.299095 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e71de2a-5efc-430d-bdff-7480ec0f7465" containerName="mariadb-database-create" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299111 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e71de2a-5efc-430d-bdff-7480ec0f7465" containerName="mariadb-database-create" Jan 28 10:00:53 crc kubenswrapper[4735]: E0128 10:00:53.299125 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902423c1-75ea-4c5a-87d3-d77dbdbc6d85" containerName="init" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299131 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="902423c1-75ea-4c5a-87d3-d77dbdbc6d85" containerName="init" Jan 28 10:00:53 crc kubenswrapper[4735]: E0128 10:00:53.299142 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da69c572-110e-4caf-aa51-a86700885cb5" containerName="keystone-db-sync" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299150 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="da69c572-110e-4caf-aa51-a86700885cb5" containerName="keystone-db-sync" Jan 28 10:00:53 crc kubenswrapper[4735]: E0128 10:00:53.299163 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb508b8-a325-46e1-9ddb-1ce941cd8d48" containerName="mariadb-account-create-update" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299169 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb508b8-a325-46e1-9ddb-1ce941cd8d48" containerName="mariadb-account-create-update" Jan 28 10:00:53 crc kubenswrapper[4735]: E0128 10:00:53.299177 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d65a3154-7da4-4c99-b8d1-e3015e6fd1fe" containerName="mariadb-account-create-update" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299183 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65a3154-7da4-4c99-b8d1-e3015e6fd1fe" containerName="mariadb-account-create-update" Jan 28 10:00:53 crc kubenswrapper[4735]: E0128 10:00:53.299199 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a278d191-6be1-4cc8-98a1-c78ecf19c1b8" containerName="mariadb-database-create" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299205 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="a278d191-6be1-4cc8-98a1-c78ecf19c1b8" containerName="mariadb-database-create" Jan 28 10:00:53 crc kubenswrapper[4735]: E0128 10:00:53.299218 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3ba296-5623-443d-9349-61c27620d525" containerName="mariadb-database-create" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299224 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3ba296-5623-443d-9349-61c27620d525" containerName="mariadb-database-create" Jan 28 10:00:53 crc kubenswrapper[4735]: E0128 10:00:53.299232 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eeb0ff9-4319-4e0b-82d8-07d523060b9f" containerName="mariadb-account-create-update" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299238 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eeb0ff9-4319-4e0b-82d8-07d523060b9f" containerName="mariadb-account-create-update" Jan 28 10:00:53 crc kubenswrapper[4735]: E0128 10:00:53.299252 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902423c1-75ea-4c5a-87d3-d77dbdbc6d85" containerName="dnsmasq-dns" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299258 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="902423c1-75ea-4c5a-87d3-d77dbdbc6d85" containerName="dnsmasq-dns" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 
10:00:53.299409 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb508b8-a325-46e1-9ddb-1ce941cd8d48" containerName="mariadb-account-create-update" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299419 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="da69c572-110e-4caf-aa51-a86700885cb5" containerName="keystone-db-sync" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299427 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="a278d191-6be1-4cc8-98a1-c78ecf19c1b8" containerName="mariadb-database-create" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299437 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eeb0ff9-4319-4e0b-82d8-07d523060b9f" containerName="mariadb-account-create-update" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299460 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="902423c1-75ea-4c5a-87d3-d77dbdbc6d85" containerName="dnsmasq-dns" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299473 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3ba296-5623-443d-9349-61c27620d525" containerName="mariadb-database-create" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299479 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e71de2a-5efc-430d-bdff-7480ec0f7465" containerName="mariadb-database-create" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299486 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65a3154-7da4-4c99-b8d1-e3015e6fd1fe" containerName="mariadb-account-create-update" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.299968 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.309007 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.309098 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.334513 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.334741 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.335105 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pnjpl" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.342806 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pgzlg"] Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.353528 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-htq8v"] Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.354738 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.372566 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-combined-ca-bundle\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.372854 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-scripts\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.372962 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-config-data\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.373050 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-fernet-keys\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.373153 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnq8\" (UniqueName: \"kubernetes.io/projected/430990db-5e89-4e89-86d6-529401e2b670-kube-api-access-dvnq8\") pod \"keystone-bootstrap-pgzlg\" (UID: 
\"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.373316 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-credential-keys\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.384697 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-htq8v"] Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475519 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-combined-ca-bundle\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475569 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-config\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475605 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkl84\" (UniqueName: \"kubernetes.io/projected/8189d350-9171-406e-8f95-500dc792c129-kube-api-access-tkl84\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475627 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475672 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-scripts\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475693 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-config-data\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475711 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-fernet-keys\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475731 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvnq8\" (UniqueName: \"kubernetes.io/projected/430990db-5e89-4e89-86d6-529401e2b670-kube-api-access-dvnq8\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475779 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475801 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475821 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-credential-keys\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.475840 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.487383 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-combined-ca-bundle\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.488957 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-config-data\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.493435 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-fernet-keys\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.494544 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-scripts\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.507221 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-credential-keys\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.539493 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvnq8\" (UniqueName: \"kubernetes.io/projected/430990db-5e89-4e89-86d6-529401e2b670-kube-api-access-dvnq8\") pod \"keystone-bootstrap-pgzlg\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.591575 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: 
\"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.591638 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.591701 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.591800 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-config\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.591864 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkl84\" (UniqueName: \"kubernetes.io/projected/8189d350-9171-406e-8f95-500dc792c129-kube-api-access-tkl84\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.591886 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.593253 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.595377 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.597029 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.597765 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-config\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.601096 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.637718 4735 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.679166 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkl84\" (UniqueName: \"kubernetes.io/projected/8189d350-9171-406e-8f95-500dc792c129-kube-api-access-tkl84\") pod \"dnsmasq-dns-5c5cc7c5ff-htq8v\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.681592 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-jn5j5"] Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.694166 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.702094 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wx4vf" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.711192 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86f45b6977-b8wpp"] Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.712635 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.724230 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.724278 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.724526 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9zrxx" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.724631 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.724818 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.725308 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.734638 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jn5j5"] Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.745238 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86f45b6977-b8wpp"] Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.795706 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzljk\" (UniqueName: \"kubernetes.io/projected/abbfe5cf-00af-4a6b-921e-401f549a7c8e-kube-api-access-hzljk\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.795781 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-config-data\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.795809 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-config-data\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.795835 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbfe5cf-00af-4a6b-921e-401f549a7c8e-etc-machine-id\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.795878 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/372d90e2-29ec-4a20-9a36-8073b54ea7b0-horizon-secret-key\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.795907 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-db-sync-config-data\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.795936 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-scripts\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.795968 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-combined-ca-bundle\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.796000 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6b5j\" (UniqueName: \"kubernetes.io/projected/372d90e2-29ec-4a20-9a36-8073b54ea7b0-kube-api-access-d6b5j\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.796031 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-scripts\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.796062 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372d90e2-29ec-4a20-9a36-8073b54ea7b0-logs\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.859862 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5ghbh"] Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.860857 4735 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.867233 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mglps" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.876967 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.877342 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.897733 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-scripts\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.897792 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-combined-ca-bundle\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.897828 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6b5j\" (UniqueName: \"kubernetes.io/projected/372d90e2-29ec-4a20-9a36-8073b54ea7b0-kube-api-access-d6b5j\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.897860 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-scripts\") pod 
\"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.897889 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372d90e2-29ec-4a20-9a36-8073b54ea7b0-logs\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.897935 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzljk\" (UniqueName: \"kubernetes.io/projected/abbfe5cf-00af-4a6b-921e-401f549a7c8e-kube-api-access-hzljk\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.897963 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-config-data\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.897978 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-config-data\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.897996 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbfe5cf-00af-4a6b-921e-401f549a7c8e-etc-machine-id\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc 
kubenswrapper[4735]: I0128 10:00:53.898031 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/372d90e2-29ec-4a20-9a36-8073b54ea7b0-horizon-secret-key\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.898054 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-db-sync-config-data\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.906808 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-db-sync-config-data\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.907286 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372d90e2-29ec-4a20-9a36-8073b54ea7b0-logs\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.907733 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-scripts\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.911110 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-config-data\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.915886 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbfe5cf-00af-4a6b-921e-401f549a7c8e-etc-machine-id\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.931805 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-scripts\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.931891 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/372d90e2-29ec-4a20-9a36-8073b54ea7b0-horizon-secret-key\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.932396 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-config-data\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.942562 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-combined-ca-bundle\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " 
pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.968403 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzljk\" (UniqueName: \"kubernetes.io/projected/abbfe5cf-00af-4a6b-921e-401f549a7c8e-kube-api-access-hzljk\") pod \"cinder-db-sync-jn5j5\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.978277 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.984320 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6b5j\" (UniqueName: \"kubernetes.io/projected/372d90e2-29ec-4a20-9a36-8073b54ea7b0-kube-api-access-d6b5j\") pod \"horizon-86f45b6977-b8wpp\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.999208 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-combined-ca-bundle\") pod \"neutron-db-sync-5ghbh\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.999295 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5vtk\" (UniqueName: \"kubernetes.io/projected/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-kube-api-access-w5vtk\") pod \"neutron-db-sync-5ghbh\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.999361 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-config\") pod \"neutron-db-sync-5ghbh\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:53 crc kubenswrapper[4735]: I0128 10:00:53.999455 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-khsht"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.000481 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.004732 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.015059 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-j2vph" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.021964 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5ghbh"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.084146 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.089560 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.091567 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.091822 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.100366 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5vtk\" (UniqueName: \"kubernetes.io/projected/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-kube-api-access-w5vtk\") pod \"neutron-db-sync-5ghbh\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.100446 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-config\") pod \"neutron-db-sync-5ghbh\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.100483 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-db-sync-config-data\") pod \"barbican-db-sync-khsht\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.100518 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86jk2\" (UniqueName: \"kubernetes.io/projected/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-kube-api-access-86jk2\") pod \"barbican-db-sync-khsht\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 
10:00:54.100547 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-combined-ca-bundle\") pod \"neutron-db-sync-5ghbh\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.100564 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-combined-ca-bundle\") pod \"barbican-db-sync-khsht\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.103897 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-khsht"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.108832 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-combined-ca-bundle\") pod \"neutron-db-sync-5ghbh\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.109224 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.118410 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-config\") pod \"neutron-db-sync-5ghbh\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.122590 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.128829 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5vtk\" (UniqueName: \"kubernetes.io/projected/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-kube-api-access-w5vtk\") pod \"neutron-db-sync-5ghbh\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.197563 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.201737 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98k45\" (UniqueName: \"kubernetes.io/projected/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-kube-api-access-98k45\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.201802 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-scripts\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.201837 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-db-sync-config-data\") pod \"barbican-db-sync-khsht\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.201875 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86jk2\" (UniqueName: 
\"kubernetes.io/projected/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-kube-api-access-86jk2\") pod \"barbican-db-sync-khsht\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.201915 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-combined-ca-bundle\") pod \"barbican-db-sync-khsht\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.201958 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-log-httpd\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.201996 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.202038 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.202087 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-config-data\") pod 
\"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.202122 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-run-httpd\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.207272 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-db-sync-config-data\") pod \"barbican-db-sync-khsht\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.223606 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-combined-ca-bundle\") pod \"barbican-db-sync-khsht\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.228787 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.236706 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86jk2\" (UniqueName: \"kubernetes.io/projected/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-kube-api-access-86jk2\") pod \"barbican-db-sync-khsht\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.247109 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-647b785cdf-524zw"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.248508 4735 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.273253 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-647b785cdf-524zw"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.287636 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-v5th8"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.288934 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.291596 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-szmvb" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.291881 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.293206 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.303565 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-config-data\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.303611 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-run-httpd\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.303658 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98k45\" (UniqueName: 
\"kubernetes.io/projected/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-kube-api-access-98k45\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.303679 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-scripts\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.303702 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-scripts\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.304389 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-run-httpd\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.307067 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae47bfd0-190f-4281-849f-d6892f20036e-logs\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.307150 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-config-data\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " 
pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.307197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-log-httpd\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.307248 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nztc\" (UniqueName: \"kubernetes.io/projected/ae47bfd0-190f-4281-849f-d6892f20036e-kube-api-access-8nztc\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.307284 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.307351 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.307409 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ae47bfd0-190f-4281-849f-d6892f20036e-horizon-secret-key\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.307782 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-log-httpd\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.308935 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-config-data\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.312801 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-scripts\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.315120 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.323177 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.324350 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v5th8"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.328346 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98k45\" (UniqueName: 
\"kubernetes.io/projected/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-kube-api-access-98k45\") pod \"ceilometer-0\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.350728 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-htq8v"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.363851 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.365719 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.367951 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.368113 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jmbxn" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.368373 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.368486 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.368846 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-khsht" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.388720 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.401600 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-vd94t"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.402898 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408460 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-scripts\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408513 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-config-data\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408541 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9msvz\" (UniqueName: \"kubernetes.io/projected/1891b5eb-afe2-4e76-b3bd-2a141a784a13-kube-api-access-9msvz\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408570 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408594 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-combined-ca-bundle\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408621 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae47bfd0-190f-4281-849f-d6892f20036e-logs\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408649 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-config-data\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408669 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408712 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nztc\" (UniqueName: 
\"kubernetes.io/projected/ae47bfd0-190f-4281-849f-d6892f20036e-kube-api-access-8nztc\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.408737 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.415558 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.415642 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-scripts\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.415688 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1891b5eb-afe2-4e76-b3bd-2a141a784a13-logs\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.415721 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-scripts\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.415793 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-scripts\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.411774 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae47bfd0-190f-4281-849f-d6892f20036e-logs\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.411218 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-config-data\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.415839 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ae47bfd0-190f-4281-849f-d6892f20036e-horizon-secret-key\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.418030 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.418105 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfn7d\" (UniqueName: \"kubernetes.io/projected/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-kube-api-access-nfn7d\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.418194 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-logs\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.415880 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-vd94t"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.431923 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.433289 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.437876 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.443932 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.444243 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.446190 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.465372 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ae47bfd0-190f-4281-849f-d6892f20036e-horizon-secret-key\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.467530 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nztc\" (UniqueName: \"kubernetes.io/projected/ae47bfd0-190f-4281-849f-d6892f20036e-kube-api-access-8nztc\") pod \"horizon-647b785cdf-524zw\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519724 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519803 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519838 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519857 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-logs\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519888 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519908 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-scripts\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519926 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/1891b5eb-afe2-4e76-b3bd-2a141a784a13-logs\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519944 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-scripts\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519967 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.519999 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-config-data\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520025 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520044 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520071 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfn7d\" (UniqueName: \"kubernetes.io/projected/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-kube-api-access-nfn7d\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520106 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520124 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4jg6\" (UniqueName: \"kubernetes.io/projected/9260c92d-b507-4f85-8b91-6202f3c6e393-kube-api-access-v4jg6\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520150 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-logs\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520167 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps9fx\" (UniqueName: 
\"kubernetes.io/projected/c57e8d73-db2b-4a13-8acc-625be70fcdce-kube-api-access-ps9fx\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520185 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-config\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520205 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-config-data\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520227 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9msvz\" (UniqueName: \"kubernetes.io/projected/1891b5eb-afe2-4e76-b3bd-2a141a784a13-kube-api-access-9msvz\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520245 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520264 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520281 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-combined-ca-bundle\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520301 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520327 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520356 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520373 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520724 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.520979 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.524106 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-logs\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.525308 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1891b5eb-afe2-4e76-b3bd-2a141a784a13-logs\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.531171 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.531450 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-scripts\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.533314 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-config-data\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.533492 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-config-data\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.548164 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.554091 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-combined-ca-bundle\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: 
I0128 10:00:54.559467 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9msvz\" (UniqueName: \"kubernetes.io/projected/1891b5eb-afe2-4e76-b3bd-2a141a784a13-kube-api-access-9msvz\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.561113 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfn7d\" (UniqueName: \"kubernetes.io/projected/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-kube-api-access-nfn7d\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.568168 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-scripts\") pod \"placement-db-sync-v5th8\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.583561 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.590819 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pgzlg"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.602781 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.614027 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-v5th8" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623465 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-logs\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623610 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623666 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623691 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623790 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: 
I0128 10:00:54.623819 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4jg6\" (UniqueName: \"kubernetes.io/projected/9260c92d-b507-4f85-8b91-6202f3c6e393-kube-api-access-v4jg6\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623850 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps9fx\" (UniqueName: \"kubernetes.io/projected/c57e8d73-db2b-4a13-8acc-625be70fcdce-kube-api-access-ps9fx\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623872 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-config\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623919 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623963 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.623996 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.624042 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.624089 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.624112 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.625993 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.629014 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.629165 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.630727 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.631293 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.632251 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.632976 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-config\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.633472 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-logs\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.633645 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.639330 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.641122 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.664969 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4jg6\" (UniqueName: \"kubernetes.io/projected/9260c92d-b507-4f85-8b91-6202f3c6e393-kube-api-access-v4jg6\") pod \"dnsmasq-dns-8b5c85b87-vd94t\" (UID: 
\"9260c92d-b507-4f85-8b91-6202f3c6e393\") " pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.667238 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.682455 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps9fx\" (UniqueName: \"kubernetes.io/projected/c57e8d73-db2b-4a13-8acc-625be70fcdce-kube-api-access-ps9fx\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.692250 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.791718 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-htq8v"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.817415 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.903255 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.926765 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.986961 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5ghbh"] Jan 28 10:00:54 crc kubenswrapper[4735]: I0128 10:00:54.994431 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86f45b6977-b8wpp"] Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.023305 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jn5j5"] Jan 28 10:00:55 crc kubenswrapper[4735]: W0128 10:00:55.058816 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabbfe5cf_00af_4a6b_921e_401f549a7c8e.slice/crio-4c54851fccbe81ec9c5c415f300c461673de26151f72b3bbeca418b4171b4975 WatchSource:0}: Error finding container 4c54851fccbe81ec9c5c415f300c461673de26151f72b3bbeca418b4171b4975: Status 404 returned error can't find the container with id 4c54851fccbe81ec9c5c415f300c461673de26151f72b3bbeca418b4171b4975 Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.088734 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86f45b6977-b8wpp" event={"ID":"372d90e2-29ec-4a20-9a36-8073b54ea7b0","Type":"ContainerStarted","Data":"8146e8cfb506d776f4f56c3632da62c866ff2fd0ce791748c0a20ea043fc6e3a"} Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.092290 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jn5j5" event={"ID":"abbfe5cf-00af-4a6b-921e-401f549a7c8e","Type":"ContainerStarted","Data":"4c54851fccbe81ec9c5c415f300c461673de26151f72b3bbeca418b4171b4975"} Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.094281 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" 
event={"ID":"8189d350-9171-406e-8f95-500dc792c129","Type":"ContainerStarted","Data":"8ff82a4ecfa86c418df2a18f6b126df98c448366fc43cc564faa612f22414d3f"} Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.097375 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pgzlg" event={"ID":"430990db-5e89-4e89-86d6-529401e2b670","Type":"ContainerStarted","Data":"228da2febe52de96cf2c285542085214f4995e28254b27f3b32a7288fffcbda9"} Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.097401 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pgzlg" event={"ID":"430990db-5e89-4e89-86d6-529401e2b670","Type":"ContainerStarted","Data":"23f2371d9dd2de74ba55a165045a73c684c5a788e8803966d30f5cd1700b4ea1"} Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.119563 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5ghbh" event={"ID":"c0e8dd26-845d-43ff-bc2d-fd6f4808736f","Type":"ContainerStarted","Data":"d702f4adf90b079ba293e27d3fef3a968bd57daa1963ba58e74bdd88600bcc9b"} Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.143608 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pgzlg" podStartSLOduration=2.143593581 podStartE2EDuration="2.143593581s" podCreationTimestamp="2026-01-28 10:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:55.13537634 +0000 UTC m=+1108.285040713" watchObservedRunningTime="2026-01-28 10:00:55.143593581 +0000 UTC m=+1108.293257954" Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.230793 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-khsht"] Jan 28 10:00:55 crc kubenswrapper[4735]: W0128 10:00:55.237564 4735 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eab452a_3efa_4f9f_a03f_e0c6a3d8bd6c.slice/crio-8beede2516c30c42f042e4d5e0ea02425381614899dd962c7f570fd14319c29a WatchSource:0}: Error finding container 8beede2516c30c42f042e4d5e0ea02425381614899dd962c7f570fd14319c29a: Status 404 returned error can't find the container with id 8beede2516c30c42f042e4d5e0ea02425381614899dd962c7f570fd14319c29a Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.239616 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:00:55 crc kubenswrapper[4735]: W0128 10:00:55.252136 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3242296_d82b_45e7_9d2b_c8f8c63ee2ca.slice/crio-e184ab7f80096e6cce59b9cebc724c723eb6b6a7116e3635ebea9f1935e1b121 WatchSource:0}: Error finding container e184ab7f80096e6cce59b9cebc724c723eb6b6a7116e3635ebea9f1935e1b121: Status 404 returned error can't find the container with id e184ab7f80096e6cce59b9cebc724c723eb6b6a7116e3635ebea9f1935e1b121 Jan 28 10:00:55 crc kubenswrapper[4735]: W0128 10:00:55.363571 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae47bfd0_190f_4281_849f_d6892f20036e.slice/crio-f38c54975f734a7e7b921239fcc967510cf54c0b38c79dee321c23e32ac471e4 WatchSource:0}: Error finding container f38c54975f734a7e7b921239fcc967510cf54c0b38c79dee321c23e32ac471e4: Status 404 returned error can't find the container with id f38c54975f734a7e7b921239fcc967510cf54c0b38c79dee321c23e32ac471e4 Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.375889 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-647b785cdf-524zw"] Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.460314 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v5th8"] Jan 28 10:00:55 crc 
kubenswrapper[4735]: I0128 10:00:55.542085 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:00:55 crc kubenswrapper[4735]: W0128 10:00:55.548407 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod295ef421_0a10_42d7_9d24_6d98ac5d1ab9.slice/crio-bc0d54d84c32c7d1679003e55a986755386ad673a12da5408ae2da4f441ed968 WatchSource:0}: Error finding container bc0d54d84c32c7d1679003e55a986755386ad673a12da5408ae2da4f441ed968: Status 404 returned error can't find the container with id bc0d54d84c32c7d1679003e55a986755386ad673a12da5408ae2da4f441ed968 Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.595617 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-vd94t"] Jan 28 10:00:55 crc kubenswrapper[4735]: I0128 10:00:55.743684 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:00:55 crc kubenswrapper[4735]: W0128 10:00:55.760525 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc57e8d73_db2b_4a13_8acc_625be70fcdce.slice/crio-c6425ea6038f121e96b319be59fca18eea7e2bf6694ef0137bd9af735bdf3b9a WatchSource:0}: Error finding container c6425ea6038f121e96b319be59fca18eea7e2bf6694ef0137bd9af735bdf3b9a: Status 404 returned error can't find the container with id c6425ea6038f121e96b319be59fca18eea7e2bf6694ef0137bd9af735bdf3b9a Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.138071 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v5th8" event={"ID":"1891b5eb-afe2-4e76-b3bd-2a141a784a13","Type":"ContainerStarted","Data":"28d588a3be18a35d482c5602b15182d6e91d45058afe6957f5db9b2af7932ec3"} Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.141171 4735 generic.go:334] "Generic (PLEG): container finished" 
podID="9260c92d-b507-4f85-8b91-6202f3c6e393" containerID="4b8c158fcc25457a26c59cb35aa99fc74ffdbef474a5acc50e8529a5b4ba25dd" exitCode=0 Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.141215 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" event={"ID":"9260c92d-b507-4f85-8b91-6202f3c6e393","Type":"ContainerDied","Data":"4b8c158fcc25457a26c59cb35aa99fc74ffdbef474a5acc50e8529a5b4ba25dd"} Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.141232 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" event={"ID":"9260c92d-b507-4f85-8b91-6202f3c6e393","Type":"ContainerStarted","Data":"43553ec3de921ed3b9b571f768d372dfaf8be08dd19553c3df4c833e18d84608"} Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.151566 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647b785cdf-524zw" event={"ID":"ae47bfd0-190f-4281-849f-d6892f20036e","Type":"ContainerStarted","Data":"f38c54975f734a7e7b921239fcc967510cf54c0b38c79dee321c23e32ac471e4"} Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.155452 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"295ef421-0a10-42d7-9d24-6d98ac5d1ab9","Type":"ContainerStarted","Data":"bc0d54d84c32c7d1679003e55a986755386ad673a12da5408ae2da4f441ed968"} Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.167071 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5ghbh" event={"ID":"c0e8dd26-845d-43ff-bc2d-fd6f4808736f","Type":"ContainerStarted","Data":"ea3ddeb2c23eb87f52d1fe471220fe6e52abda2a3f03d74c722b495041206d49"} Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.173141 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca","Type":"ContainerStarted","Data":"e184ab7f80096e6cce59b9cebc724c723eb6b6a7116e3635ebea9f1935e1b121"} Jan 
28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.175579 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c57e8d73-db2b-4a13-8acc-625be70fcdce","Type":"ContainerStarted","Data":"c6425ea6038f121e96b319be59fca18eea7e2bf6694ef0137bd9af735bdf3b9a"} Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.191029 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-khsht" event={"ID":"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c","Type":"ContainerStarted","Data":"8beede2516c30c42f042e4d5e0ea02425381614899dd962c7f570fd14319c29a"} Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.192640 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5ghbh" podStartSLOduration=3.192573888 podStartE2EDuration="3.192573888s" podCreationTimestamp="2026-01-28 10:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:56.185387899 +0000 UTC m=+1109.335052292" watchObservedRunningTime="2026-01-28 10:00:56.192573888 +0000 UTC m=+1109.342238251" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.199345 4735 generic.go:334] "Generic (PLEG): container finished" podID="8189d350-9171-406e-8f95-500dc792c129" containerID="1c71a7d518111fda88bcc361088fbea8a819ed344791bd4da86df6cc1ea3fe65" exitCode=0 Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.199403 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" event={"ID":"8189d350-9171-406e-8f95-500dc792c129","Type":"ContainerDied","Data":"1c71a7d518111fda88bcc361088fbea8a819ed344791bd4da86df6cc1ea3fe65"} Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.632482 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.703388 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.741019 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.757440 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-647b785cdf-524zw"] Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.783670 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-nb\") pod \"8189d350-9171-406e-8f95-500dc792c129\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.783731 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-svc\") pod \"8189d350-9171-406e-8f95-500dc792c129\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.783778 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-sb\") pod \"8189d350-9171-406e-8f95-500dc792c129\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.783840 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkl84\" (UniqueName: \"kubernetes.io/projected/8189d350-9171-406e-8f95-500dc792c129-kube-api-access-tkl84\") pod \"8189d350-9171-406e-8f95-500dc792c129\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " Jan 28 10:00:56 crc kubenswrapper[4735]: 
I0128 10:00:56.783926 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-swift-storage-0\") pod \"8189d350-9171-406e-8f95-500dc792c129\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.783988 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-config\") pod \"8189d350-9171-406e-8f95-500dc792c129\" (UID: \"8189d350-9171-406e-8f95-500dc792c129\") " Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.809045 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b9fc69d49-fmptv"] Jan 28 10:00:56 crc kubenswrapper[4735]: E0128 10:00:56.809443 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8189d350-9171-406e-8f95-500dc792c129" containerName="init" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.809455 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8189d350-9171-406e-8f95-500dc792c129" containerName="init" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.809641 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8189d350-9171-406e-8f95-500dc792c129" containerName="init" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.822058 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.830337 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8189d350-9171-406e-8f95-500dc792c129-kube-api-access-tkl84" (OuterVolumeSpecName: "kube-api-access-tkl84") pod "8189d350-9171-406e-8f95-500dc792c129" (UID: "8189d350-9171-406e-8f95-500dc792c129"). InnerVolumeSpecName "kube-api-access-tkl84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.843630 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.854806 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b9fc69d49-fmptv"] Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.861854 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8189d350-9171-406e-8f95-500dc792c129" (UID: "8189d350-9171-406e-8f95-500dc792c129"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.888153 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-config" (OuterVolumeSpecName: "config") pod "8189d350-9171-406e-8f95-500dc792c129" (UID: "8189d350-9171-406e-8f95-500dc792c129"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.890478 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkl84\" (UniqueName: \"kubernetes.io/projected/8189d350-9171-406e-8f95-500dc792c129-kube-api-access-tkl84\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.890521 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.890533 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.891607 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8189d350-9171-406e-8f95-500dc792c129" (UID: "8189d350-9171-406e-8f95-500dc792c129"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.929411 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8189d350-9171-406e-8f95-500dc792c129" (UID: "8189d350-9171-406e-8f95-500dc792c129"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.936240 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8189d350-9171-406e-8f95-500dc792c129" (UID: "8189d350-9171-406e-8f95-500dc792c129"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.991429 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ea7d825-2458-4f96-b652-f51e66d6f0cd-logs\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.991466 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-config-data\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.991508 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbgvd\" (UniqueName: \"kubernetes.io/projected/7ea7d825-2458-4f96-b652-f51e66d6f0cd-kube-api-access-zbgvd\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.991676 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-scripts\") pod \"horizon-b9fc69d49-fmptv\" (UID: 
\"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.991858 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ea7d825-2458-4f96-b652-f51e66d6f0cd-horizon-secret-key\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.991946 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.991956 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:56 crc kubenswrapper[4735]: I0128 10:00:56.991966 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8189d350-9171-406e-8f95-500dc792c129-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.093818 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbgvd\" (UniqueName: \"kubernetes.io/projected/7ea7d825-2458-4f96-b652-f51e66d6f0cd-kube-api-access-zbgvd\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.093888 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-scripts\") pod \"horizon-b9fc69d49-fmptv\" (UID: 
\"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.093954 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ea7d825-2458-4f96-b652-f51e66d6f0cd-horizon-secret-key\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.094019 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ea7d825-2458-4f96-b652-f51e66d6f0cd-logs\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.094038 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-config-data\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.094541 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ea7d825-2458-4f96-b652-f51e66d6f0cd-logs\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.095389 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-scripts\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.095581 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-config-data\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.099686 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ea7d825-2458-4f96-b652-f51e66d6f0cd-horizon-secret-key\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.112219 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbgvd\" (UniqueName: \"kubernetes.io/projected/7ea7d825-2458-4f96-b652-f51e66d6f0cd-kube-api-access-zbgvd\") pod \"horizon-b9fc69d49-fmptv\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.226837 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" event={"ID":"8189d350-9171-406e-8f95-500dc792c129","Type":"ContainerDied","Data":"8ff82a4ecfa86c418df2a18f6b126df98c448366fc43cc564faa612f22414d3f"} Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.226867 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-htq8v" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.226889 4735 scope.go:117] "RemoveContainer" containerID="1c71a7d518111fda88bcc361088fbea8a819ed344791bd4da86df6cc1ea3fe65" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.232960 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c57e8d73-db2b-4a13-8acc-625be70fcdce","Type":"ContainerStarted","Data":"e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93"} Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.244538 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" event={"ID":"9260c92d-b507-4f85-8b91-6202f3c6e393","Type":"ContainerStarted","Data":"dcc3d7b52e76dea3c5125ffc0fa92281a13ac24b5aeb0ee0332fdbbabacea92a"} Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.244601 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.249639 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"295ef421-0a10-42d7-9d24-6d98ac5d1ab9","Type":"ContainerStarted","Data":"f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b"} Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.265798 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.285312 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-htq8v"] Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.288835 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-htq8v"] Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.297327 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" podStartSLOduration=3.297307496 podStartE2EDuration="3.297307496s" podCreationTimestamp="2026-01-28 10:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:57.294050965 +0000 UTC m=+1110.443715338" watchObservedRunningTime="2026-01-28 10:00:57.297307496 +0000 UTC m=+1110.446971869" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.590914 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8189d350-9171-406e-8f95-500dc792c129" path="/var/lib/kubelet/pods/8189d350-9171-406e-8f95-500dc792c129/volumes" Jan 28 10:00:57 crc kubenswrapper[4735]: I0128 10:00:57.870524 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b9fc69d49-fmptv"] Jan 28 10:00:57 crc kubenswrapper[4735]: W0128 10:00:57.926256 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea7d825_2458_4f96_b652_f51e66d6f0cd.slice/crio-7ece4de6735a79f82b0e497d1e79b5155c93b9ec8ec626c663f01f809e812739 WatchSource:0}: Error finding container 7ece4de6735a79f82b0e497d1e79b5155c93b9ec8ec626c663f01f809e812739: Status 404 returned error can't find the container with id 7ece4de6735a79f82b0e497d1e79b5155c93b9ec8ec626c663f01f809e812739 Jan 28 10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.260558 4735 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c57e8d73-db2b-4a13-8acc-625be70fcdce","Type":"ContainerStarted","Data":"db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be"} Jan 28 10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.260599 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerName="glance-log" containerID="cri-o://e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93" gracePeriod=30 Jan 28 10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.260948 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerName="glance-httpd" containerID="cri-o://db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be" gracePeriod=30 Jan 28 10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.265474 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerName="glance-log" containerID="cri-o://f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b" gracePeriod=30 Jan 28 10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.265576 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerName="glance-httpd" containerID="cri-o://f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9" gracePeriod=30 Jan 28 10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.265621 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"295ef421-0a10-42d7-9d24-6d98ac5d1ab9","Type":"ContainerStarted","Data":"f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9"} Jan 28 
10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.273841 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b9fc69d49-fmptv" event={"ID":"7ea7d825-2458-4f96-b652-f51e66d6f0cd","Type":"ContainerStarted","Data":"7ece4de6735a79f82b0e497d1e79b5155c93b9ec8ec626c663f01f809e812739"} Jan 28 10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.286999 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.286979242 podStartE2EDuration="4.286979242s" podCreationTimestamp="2026-01-28 10:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:58.280189442 +0000 UTC m=+1111.429853815" watchObservedRunningTime="2026-01-28 10:00:58.286979242 +0000 UTC m=+1111.436643615" Jan 28 10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.305409 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.305390979 podStartE2EDuration="4.305390979s" podCreationTimestamp="2026-01-28 10:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:00:58.300833118 +0000 UTC m=+1111.450497501" watchObservedRunningTime="2026-01-28 10:00:58.305390979 +0000 UTC m=+1111.455055342" Jan 28 10:00:58 crc kubenswrapper[4735]: I0128 10:00:58.996248 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.149797 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.182675 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-httpd-run\") pod \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.182733 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-logs\") pod \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.182795 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.182823 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-public-tls-certs\") pod \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.182881 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-scripts\") pod \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.182901 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-combined-ca-bundle\") pod \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.182959 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-config-data\") pod \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.182977 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfn7d\" (UniqueName: \"kubernetes.io/projected/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-kube-api-access-nfn7d\") pod \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\" (UID: \"295ef421-0a10-42d7-9d24-6d98ac5d1ab9\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.187377 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-logs" (OuterVolumeSpecName: "logs") pod "295ef421-0a10-42d7-9d24-6d98ac5d1ab9" (UID: "295ef421-0a10-42d7-9d24-6d98ac5d1ab9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.187698 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "295ef421-0a10-42d7-9d24-6d98ac5d1ab9" (UID: "295ef421-0a10-42d7-9d24-6d98ac5d1ab9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.192011 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "295ef421-0a10-42d7-9d24-6d98ac5d1ab9" (UID: "295ef421-0a10-42d7-9d24-6d98ac5d1ab9"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.192318 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-scripts" (OuterVolumeSpecName: "scripts") pod "295ef421-0a10-42d7-9d24-6d98ac5d1ab9" (UID: "295ef421-0a10-42d7-9d24-6d98ac5d1ab9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.194156 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-kube-api-access-nfn7d" (OuterVolumeSpecName: "kube-api-access-nfn7d") pod "295ef421-0a10-42d7-9d24-6d98ac5d1ab9" (UID: "295ef421-0a10-42d7-9d24-6d98ac5d1ab9"). InnerVolumeSpecName "kube-api-access-nfn7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.212287 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "295ef421-0a10-42d7-9d24-6d98ac5d1ab9" (UID: "295ef421-0a10-42d7-9d24-6d98ac5d1ab9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.236943 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-config-data" (OuterVolumeSpecName: "config-data") pod "295ef421-0a10-42d7-9d24-6d98ac5d1ab9" (UID: "295ef421-0a10-42d7-9d24-6d98ac5d1ab9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.270419 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "295ef421-0a10-42d7-9d24-6d98ac5d1ab9" (UID: "295ef421-0a10-42d7-9d24-6d98ac5d1ab9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.284665 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-scripts\") pod \"c57e8d73-db2b-4a13-8acc-625be70fcdce\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.284713 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-logs\") pod \"c57e8d73-db2b-4a13-8acc-625be70fcdce\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.284840 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps9fx\" (UniqueName: \"kubernetes.io/projected/c57e8d73-db2b-4a13-8acc-625be70fcdce-kube-api-access-ps9fx\") pod \"c57e8d73-db2b-4a13-8acc-625be70fcdce\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " Jan 28 10:00:59 crc 
kubenswrapper[4735]: I0128 10:00:59.284867 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-internal-tls-certs\") pod \"c57e8d73-db2b-4a13-8acc-625be70fcdce\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.284892 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-combined-ca-bundle\") pod \"c57e8d73-db2b-4a13-8acc-625be70fcdce\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.284974 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c57e8d73-db2b-4a13-8acc-625be70fcdce\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.285002 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-config-data\") pod \"c57e8d73-db2b-4a13-8acc-625be70fcdce\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.285044 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-httpd-run\") pod \"c57e8d73-db2b-4a13-8acc-625be70fcdce\" (UID: \"c57e8d73-db2b-4a13-8acc-625be70fcdce\") " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.285365 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc 
kubenswrapper[4735]: I0128 10:00:59.285382 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.285402 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.285410 4735 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.285419 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.285428 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.285436 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.285445 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfn7d\" (UniqueName: \"kubernetes.io/projected/295ef421-0a10-42d7-9d24-6d98ac5d1ab9-kube-api-access-nfn7d\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.286661 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c57e8d73-db2b-4a13-8acc-625be70fcdce" (UID: "c57e8d73-db2b-4a13-8acc-625be70fcdce"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.287584 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-logs" (OuterVolumeSpecName: "logs") pod "c57e8d73-db2b-4a13-8acc-625be70fcdce" (UID: "c57e8d73-db2b-4a13-8acc-625be70fcdce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.289395 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "c57e8d73-db2b-4a13-8acc-625be70fcdce" (UID: "c57e8d73-db2b-4a13-8acc-625be70fcdce"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.289579 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-scripts" (OuterVolumeSpecName: "scripts") pod "c57e8d73-db2b-4a13-8acc-625be70fcdce" (UID: "c57e8d73-db2b-4a13-8acc-625be70fcdce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.294066 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57e8d73-db2b-4a13-8acc-625be70fcdce-kube-api-access-ps9fx" (OuterVolumeSpecName: "kube-api-access-ps9fx") pod "c57e8d73-db2b-4a13-8acc-625be70fcdce" (UID: "c57e8d73-db2b-4a13-8acc-625be70fcdce"). InnerVolumeSpecName "kube-api-access-ps9fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.307610 4735 generic.go:334] "Generic (PLEG): container finished" podID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerID="db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be" exitCode=0 Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.307645 4735 generic.go:334] "Generic (PLEG): container finished" podID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerID="e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93" exitCode=143 Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.307691 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c57e8d73-db2b-4a13-8acc-625be70fcdce","Type":"ContainerDied","Data":"db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be"} Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.307720 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c57e8d73-db2b-4a13-8acc-625be70fcdce","Type":"ContainerDied","Data":"e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93"} Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.307731 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c57e8d73-db2b-4a13-8acc-625be70fcdce","Type":"ContainerDied","Data":"c6425ea6038f121e96b319be59fca18eea7e2bf6694ef0137bd9af735bdf3b9a"} Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.307773 4735 scope.go:117] "RemoveContainer" containerID="db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.307868 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.307928 4735 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.319606 4735 generic.go:334] "Generic (PLEG): container finished" podID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerID="f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9" exitCode=0 Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.319649 4735 generic.go:334] "Generic (PLEG): container finished" podID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerID="f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b" exitCode=143 Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.319654 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.319665 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"295ef421-0a10-42d7-9d24-6d98ac5d1ab9","Type":"ContainerDied","Data":"f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9"} Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.319705 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"295ef421-0a10-42d7-9d24-6d98ac5d1ab9","Type":"ContainerDied","Data":"f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b"} Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.319722 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"295ef421-0a10-42d7-9d24-6d98ac5d1ab9","Type":"ContainerDied","Data":"bc0d54d84c32c7d1679003e55a986755386ad673a12da5408ae2da4f441ed968"} Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.331818 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "c57e8d73-db2b-4a13-8acc-625be70fcdce" (UID: "c57e8d73-db2b-4a13-8acc-625be70fcdce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.365253 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c57e8d73-db2b-4a13-8acc-625be70fcdce" (UID: "c57e8d73-db2b-4a13-8acc-625be70fcdce"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.371995 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-config-data" (OuterVolumeSpecName: "config-data") pod "c57e8d73-db2b-4a13-8acc-625be70fcdce" (UID: "c57e8d73-db2b-4a13-8acc-625be70fcdce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.387363 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.387440 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.387455 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.387467 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.387483 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.387496 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57e8d73-db2b-4a13-8acc-625be70fcdce-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.387510 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps9fx\" (UniqueName: \"kubernetes.io/projected/c57e8d73-db2b-4a13-8acc-625be70fcdce-kube-api-access-ps9fx\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.387528 4735 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.387544 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57e8d73-db2b-4a13-8acc-625be70fcdce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.406770 4735 scope.go:117] "RemoveContainer" containerID="e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.410588 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.430891 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.439218 4735 scope.go:117] "RemoveContainer" containerID="db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be" Jan 28 10:00:59 crc kubenswrapper[4735]: E0128 10:00:59.449228 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be\": container with ID starting with db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be not found: ID does not exist" containerID="db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.449270 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be"} err="failed to get container status \"db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be\": rpc error: code = NotFound desc = could not 
find container \"db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be\": container with ID starting with db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be not found: ID does not exist" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.449296 4735 scope.go:117] "RemoveContainer" containerID="e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93" Jan 28 10:00:59 crc kubenswrapper[4735]: E0128 10:00:59.449600 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93\": container with ID starting with e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93 not found: ID does not exist" containerID="e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.449625 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93"} err="failed to get container status \"e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93\": rpc error: code = NotFound desc = could not find container \"e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93\": container with ID starting with e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93 not found: ID does not exist" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.449645 4735 scope.go:117] "RemoveContainer" containerID="db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.449907 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be"} err="failed to get container status \"db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be\": rpc error: code = NotFound desc = 
could not find container \"db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be\": container with ID starting with db6e2dd56889549414b6fdd4a14c7f09fb709888729632698e294c0ef57c97be not found: ID does not exist" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.449926 4735 scope.go:117] "RemoveContainer" containerID="e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.450283 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93"} err="failed to get container status \"e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93\": rpc error: code = NotFound desc = could not find container \"e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93\": container with ID starting with e8418555af7ea990c531401ef5d384ea4864d978d7beb98964d03e4a022f1d93 not found: ID does not exist" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.450340 4735 scope.go:117] "RemoveContainer" containerID="f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.454011 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.484043 4735 scope.go:117] "RemoveContainer" containerID="f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.488576 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.496660 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:00:59 crc kubenswrapper[4735]: E0128 10:00:59.497227 4735 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerName="glance-httpd" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.497252 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerName="glance-httpd" Jan 28 10:00:59 crc kubenswrapper[4735]: E0128 10:00:59.497266 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerName="glance-log" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.497274 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerName="glance-log" Jan 28 10:00:59 crc kubenswrapper[4735]: E0128 10:00:59.497286 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerName="glance-log" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.497293 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerName="glance-log" Jan 28 10:00:59 crc kubenswrapper[4735]: E0128 10:00:59.497311 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerName="glance-httpd" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.497318 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerName="glance-httpd" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.497526 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerName="glance-log" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.497541 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerName="glance-log" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.497554 4735 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c57e8d73-db2b-4a13-8acc-625be70fcdce" containerName="glance-httpd" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.497568 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" containerName="glance-httpd" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.498679 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.501556 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.501791 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.516710 4735 scope.go:117] "RemoveContainer" containerID="f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9" Jan 28 10:00:59 crc kubenswrapper[4735]: E0128 10:00:59.521964 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9\": container with ID starting with f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9 not found: ID does not exist" containerID="f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.522043 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9"} err="failed to get container status \"f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9\": rpc error: code = NotFound desc = could not find container \"f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9\": container with ID starting with 
f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9 not found: ID does not exist" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.522090 4735 scope.go:117] "RemoveContainer" containerID="f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b" Jan 28 10:00:59 crc kubenswrapper[4735]: E0128 10:00:59.523374 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b\": container with ID starting with f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b not found: ID does not exist" containerID="f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.523411 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b"} err="failed to get container status \"f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b\": rpc error: code = NotFound desc = could not find container \"f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b\": container with ID starting with f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b not found: ID does not exist" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.523434 4735 scope.go:117] "RemoveContainer" containerID="f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.526218 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9"} err="failed to get container status \"f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9\": rpc error: code = NotFound desc = could not find container \"f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9\": container with ID 
starting with f6b8d90648e91d4a53fa648ca5be14019b3591912a02ab6d68bee6983a1454f9 not found: ID does not exist" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.526238 4735 scope.go:117] "RemoveContainer" containerID="f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.526625 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b"} err="failed to get container status \"f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b\": rpc error: code = NotFound desc = could not find container \"f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b\": container with ID starting with f9d22241e9da732836c353708f6a231f3251aa686d1d44669ee97189687a791b not found: ID does not exist" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.544590 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="295ef421-0a10-42d7-9d24-6d98ac5d1ab9" path="/var/lib/kubelet/pods/295ef421-0a10-42d7-9d24-6d98ac5d1ab9/volumes" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.556932 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.662448 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.677246 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.684000 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.686100 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.688133 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.688303 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.691022 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.693095 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-scripts\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.693135 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.693159 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-logs\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.693188 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.693205 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjnfh\" (UniqueName: \"kubernetes.io/projected/568b439d-be34-4947-a976-27d1410320f4-kube-api-access-sjnfh\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.693261 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.693296 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.693324 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-config-data\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.794622 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.794660 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjnfh\" (UniqueName: \"kubernetes.io/projected/568b439d-be34-4947-a976-27d1410320f4-kube-api-access-sjnfh\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.794720 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.794740 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.794777 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.794805 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.794823 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-logs\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.794981 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.795009 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhdl\" (UniqueName: \"kubernetes.io/projected/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-kube-api-access-glhdl\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.795030 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.795049 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-config-data\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.795089 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.795108 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-scripts\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.795138 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.795159 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-logs\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.795180 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.795691 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.797206 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.797393 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-logs\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.799719 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-scripts\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.801062 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-config-data\") pod \"glance-default-external-api-0\" 
(UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.802413 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.808459 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.814453 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjnfh\" (UniqueName: \"kubernetes.io/projected/568b439d-be34-4947-a976-27d1410320f4-kube-api-access-sjnfh\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.833546 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " pod="openstack/glance-default-external-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.896536 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " 
pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.896628 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.896657 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.896688 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.896861 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.897129 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-logs\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc 
kubenswrapper[4735]: I0128 10:00:59.897216 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.897247 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glhdl\" (UniqueName: \"kubernetes.io/projected/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-kube-api-access-glhdl\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.897302 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.897456 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.897544 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-logs\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.902767 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.903084 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.910954 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.912064 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.914828 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glhdl\" (UniqueName: \"kubernetes.io/projected/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-kube-api-access-glhdl\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:00:59 crc kubenswrapper[4735]: I0128 10:00:59.940475 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:00 crc kubenswrapper[4735]: I0128 10:01:00.019222 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:00 crc kubenswrapper[4735]: I0128 10:01:00.134886 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:01:00 crc kubenswrapper[4735]: I0128 10:01:00.338176 4735 generic.go:334] "Generic (PLEG): container finished" podID="430990db-5e89-4e89-86d6-529401e2b670" containerID="228da2febe52de96cf2c285542085214f4995e28254b27f3b32a7288fffcbda9" exitCode=0 Jan 28 10:01:00 crc kubenswrapper[4735]: I0128 10:01:00.338228 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pgzlg" event={"ID":"430990db-5e89-4e89-86d6-529401e2b670","Type":"ContainerDied","Data":"228da2febe52de96cf2c285542085214f4995e28254b27f3b32a7288fffcbda9"} Jan 28 10:01:01 crc kubenswrapper[4735]: I0128 10:01:01.542584 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c57e8d73-db2b-4a13-8acc-625be70fcdce" path="/var/lib/kubelet/pods/c57e8d73-db2b-4a13-8acc-625be70fcdce/volumes" Jan 28 10:01:01 crc kubenswrapper[4735]: I0128 10:01:01.989667 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86f45b6977-b8wpp"] Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.035556 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bdcb89768-499m5"] Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.036905 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.043143 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.096041 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bdcb89768-499m5"] Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.126605 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.136529 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b9fc69d49-fmptv"] Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.139669 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/160ae085-44c0-4cef-9854-b0686547b43c-logs\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.139730 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-combined-ca-bundle\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.139786 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-scripts\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.139828 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkd2x\" (UniqueName: \"kubernetes.io/projected/160ae085-44c0-4cef-9854-b0686547b43c-kube-api-access-kkd2x\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.139861 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-config-data\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.139911 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-tls-certs\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.139985 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-secret-key\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.173266 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55bcb7d5f8-xkjl6"] Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.177533 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.180213 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55bcb7d5f8-xkjl6"] Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.186597 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241014 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-horizon-tls-certs\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241078 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/160ae085-44c0-4cef-9854-b0686547b43c-logs\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241105 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-combined-ca-bundle\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241130 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-horizon-secret-key\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 
10:01:02.241149 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-scripts\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241168 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-scripts\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241192 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-logs\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241210 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkd2x\" (UniqueName: \"kubernetes.io/projected/160ae085-44c0-4cef-9854-b0686547b43c-kube-api-access-kkd2x\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241233 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-config-data\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241263 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-config-data\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241293 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhj69\" (UniqueName: \"kubernetes.io/projected/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-kube-api-access-rhj69\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241319 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-tls-certs\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241345 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-combined-ca-bundle\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.241394 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-secret-key\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.243245 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-config-data\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.243900 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-scripts\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.244054 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/160ae085-44c0-4cef-9854-b0686547b43c-logs\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.247683 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-combined-ca-bundle\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.249646 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-secret-key\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.258632 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-tls-certs\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " 
pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.262422 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkd2x\" (UniqueName: \"kubernetes.io/projected/160ae085-44c0-4cef-9854-b0686547b43c-kube-api-access-kkd2x\") pod \"horizon-bdcb89768-499m5\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.343166 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-logs\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.343271 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-config-data\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.343343 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhj69\" (UniqueName: \"kubernetes.io/projected/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-kube-api-access-rhj69\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.343417 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-combined-ca-bundle\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 
10:01:02.343501 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-logs\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.343904 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-horizon-tls-certs\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.344008 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-horizon-secret-key\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.344031 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-scripts\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.344547 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-scripts\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.344820 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-config-data\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.349114 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-combined-ca-bundle\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.349307 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-horizon-tls-certs\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.349332 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-horizon-secret-key\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.360301 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhj69\" (UniqueName: \"kubernetes.io/projected/9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79-kube-api-access-rhj69\") pod \"horizon-55bcb7d5f8-xkjl6\" (UID: \"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79\") " pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.395644 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:02 crc kubenswrapper[4735]: I0128 10:01:02.508379 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.366714 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pgzlg" event={"ID":"430990db-5e89-4e89-86d6-529401e2b670","Type":"ContainerDied","Data":"23f2371d9dd2de74ba55a165045a73c684c5a788e8803966d30f5cd1700b4ea1"} Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.367167 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f2371d9dd2de74ba55a165045a73c684c5a788e8803966d30f5cd1700b4ea1" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.393435 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.461717 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-scripts\") pod \"430990db-5e89-4e89-86d6-529401e2b670\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.462020 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-combined-ca-bundle\") pod \"430990db-5e89-4e89-86d6-529401e2b670\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.462158 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-fernet-keys\") pod \"430990db-5e89-4e89-86d6-529401e2b670\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " 
Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.462450 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvnq8\" (UniqueName: \"kubernetes.io/projected/430990db-5e89-4e89-86d6-529401e2b670-kube-api-access-dvnq8\") pod \"430990db-5e89-4e89-86d6-529401e2b670\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.462486 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-credential-keys\") pod \"430990db-5e89-4e89-86d6-529401e2b670\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.462524 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-config-data\") pod \"430990db-5e89-4e89-86d6-529401e2b670\" (UID: \"430990db-5e89-4e89-86d6-529401e2b670\") " Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.467230 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "430990db-5e89-4e89-86d6-529401e2b670" (UID: "430990db-5e89-4e89-86d6-529401e2b670"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.467472 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "430990db-5e89-4e89-86d6-529401e2b670" (UID: "430990db-5e89-4e89-86d6-529401e2b670"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.469001 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-scripts" (OuterVolumeSpecName: "scripts") pod "430990db-5e89-4e89-86d6-529401e2b670" (UID: "430990db-5e89-4e89-86d6-529401e2b670"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.469633 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430990db-5e89-4e89-86d6-529401e2b670-kube-api-access-dvnq8" (OuterVolumeSpecName: "kube-api-access-dvnq8") pod "430990db-5e89-4e89-86d6-529401e2b670" (UID: "430990db-5e89-4e89-86d6-529401e2b670"). InnerVolumeSpecName "kube-api-access-dvnq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.493188 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-config-data" (OuterVolumeSpecName: "config-data") pod "430990db-5e89-4e89-86d6-529401e2b670" (UID: "430990db-5e89-4e89-86d6-529401e2b670"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.498533 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "430990db-5e89-4e89-86d6-529401e2b670" (UID: "430990db-5e89-4e89-86d6-529401e2b670"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.565301 4735 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.565332 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvnq8\" (UniqueName: \"kubernetes.io/projected/430990db-5e89-4e89-86d6-529401e2b670-kube-api-access-dvnq8\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.565342 4735 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.565350 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.565357 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:03 crc kubenswrapper[4735]: I0128 10:01:03.565365 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430990db-5e89-4e89-86d6-529401e2b670-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.374154 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pgzlg" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.490175 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pgzlg"] Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.497509 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pgzlg"] Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.576955 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2vhll"] Jan 28 10:01:04 crc kubenswrapper[4735]: E0128 10:01:04.577329 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430990db-5e89-4e89-86d6-529401e2b670" containerName="keystone-bootstrap" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.577345 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="430990db-5e89-4e89-86d6-529401e2b670" containerName="keystone-bootstrap" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.577495 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="430990db-5e89-4e89-86d6-529401e2b670" containerName="keystone-bootstrap" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.578085 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.580134 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pnjpl" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.580175 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.580187 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.584326 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.586737 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.602287 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2vhll"] Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.692864 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-credential-keys\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.693073 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-config-data\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.693265 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-d6qwl\" (UniqueName: \"kubernetes.io/projected/7679ac38-8750-48e8-9f60-0c7214eef680-kube-api-access-d6qwl\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.693392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-scripts\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.693448 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-fernet-keys\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.693534 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-combined-ca-bundle\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.795365 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-fernet-keys\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.795447 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-combined-ca-bundle\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.795519 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-credential-keys\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.795563 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-config-data\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.795611 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6qwl\" (UniqueName: \"kubernetes.io/projected/7679ac38-8750-48e8-9f60-0c7214eef680-kube-api-access-d6qwl\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.795678 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-scripts\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.801247 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-combined-ca-bundle\") pod 
\"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.801469 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-config-data\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.802020 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-scripts\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.802323 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-credential-keys\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.811904 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6qwl\" (UniqueName: \"kubernetes.io/projected/7679ac38-8750-48e8-9f60-0c7214eef680-kube-api-access-d6qwl\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.812585 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-fernet-keys\") pod \"keystone-bootstrap-2vhll\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc 
kubenswrapper[4735]: I0128 10:01:04.905026 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.905310 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.961610 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-w2t8m"] Jan 28 10:01:04 crc kubenswrapper[4735]: I0128 10:01:04.961848 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" podUID="b8117919-6bc1-449e-b3b5-668a44463190" containerName="dnsmasq-dns" containerID="cri-o://e125a8a7b38b5cee9070bb72b8256531f85f9205068731bcb8473f3617d1b708" gracePeriod=10 Jan 28 10:01:05 crc kubenswrapper[4735]: I0128 10:01:05.383513 4735 generic.go:334] "Generic (PLEG): container finished" podID="b8117919-6bc1-449e-b3b5-668a44463190" containerID="e125a8a7b38b5cee9070bb72b8256531f85f9205068731bcb8473f3617d1b708" exitCode=0 Jan 28 10:01:05 crc kubenswrapper[4735]: I0128 10:01:05.384168 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" event={"ID":"b8117919-6bc1-449e-b3b5-668a44463190","Type":"ContainerDied","Data":"e125a8a7b38b5cee9070bb72b8256531f85f9205068731bcb8473f3617d1b708"} Jan 28 10:01:05 crc kubenswrapper[4735]: I0128 10:01:05.430113 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:01:05 crc kubenswrapper[4735]: I0128 10:01:05.430166 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" 
podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:01:05 crc kubenswrapper[4735]: I0128 10:01:05.511498 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430990db-5e89-4e89-86d6-529401e2b670" path="/var/lib/kubelet/pods/430990db-5e89-4e89-86d6-529401e2b670/volumes" Jan 28 10:01:09 crc kubenswrapper[4735]: I0128 10:01:09.545107 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" podUID="b8117919-6bc1-449e-b3b5-668a44463190" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 28 10:01:14 crc kubenswrapper[4735]: I0128 10:01:14.545459 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" podUID="b8117919-6bc1-449e-b3b5-668a44463190" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 28 10:01:14 crc kubenswrapper[4735]: E0128 10:01:14.740547 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 28 10:01:14 crc kubenswrapper[4735]: E0128 10:01:14.740726 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57dh59bh5c5h5d7h675h596h7bh5dbh679h9ch57h659h695h7fh686h8bh65fhbh74h6ch589h5b7h649hbbh58chbch667hbbh55h5bfh5d9h598q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nztc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-647b785cdf-524zw_openstack(ae47bfd0-190f-4281-849f-d6892f20036e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 10:01:14 crc kubenswrapper[4735]: E0128 
10:01:14.746042 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-647b785cdf-524zw" podUID="ae47bfd0-190f-4281-849f-d6892f20036e" Jan 28 10:01:14 crc kubenswrapper[4735]: E0128 10:01:14.756841 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 28 10:01:14 crc kubenswrapper[4735]: E0128 10:01:14.756984 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncdh5ch7fh696h57fhf4h78h668h55ch56ch568h59fh5b9h86h644hch676h5b7h575h679h5ch564h587h689h598h7dh97h5f8h565h568h5dch5dfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6b5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-86f45b6977-b8wpp_openstack(372d90e2-29ec-4a20-9a36-8073b54ea7b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 10:01:14 crc kubenswrapper[4735]: E0128 
10:01:14.759701 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-86f45b6977-b8wpp" podUID="372d90e2-29ec-4a20-9a36-8073b54ea7b0" Jan 28 10:01:15 crc kubenswrapper[4735]: E0128 10:01:15.057993 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 28 10:01:15 crc kubenswrapper[4735]: E0128 10:01:15.058200 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9ch648h55bh698hf5h8dh65ch67fh67fh58h5bbhfch6h575hc8h5d5hc7h646h5fdh684h65h679h574h75hcbh595h65h4hdbh685h79h654q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98k45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f3242296-d82b-45e7-9d2b-c8f8c63ee2ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 10:01:15 crc kubenswrapper[4735]: E0128 10:01:15.060984 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 28 10:01:15 crc kubenswrapper[4735]: E0128 10:01:15.061138 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n556hc7h6h54dh5f4hd7h567h66h559hd6h694h5bdh668hc4h588h644h9fhdbh559hd9h679h559h586h544h5d9h587hfdhcch7ch6h4h55cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbgvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-b9fc69d49-fmptv_openstack(7ea7d825-2458-4f96-b652-f51e66d6f0cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 10:01:15 crc kubenswrapper[4735]: E0128 10:01:15.066963 
4735 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-b9fc69d49-fmptv" podUID="7ea7d825-2458-4f96-b652-f51e66d6f0cd" Jan 28 10:01:15 crc kubenswrapper[4735]: I0128 10:01:15.648184 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:01:17 crc kubenswrapper[4735]: I0128 10:01:17.503512 4735 generic.go:334] "Generic (PLEG): container finished" podID="c0e8dd26-845d-43ff-bc2d-fd6f4808736f" containerID="ea3ddeb2c23eb87f52d1fe471220fe6e52abda2a3f03d74c722b495041206d49" exitCode=0 Jan 28 10:01:17 crc kubenswrapper[4735]: I0128 10:01:17.512685 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5ghbh" event={"ID":"c0e8dd26-845d-43ff-bc2d-fd6f4808736f","Type":"ContainerDied","Data":"ea3ddeb2c23eb87f52d1fe471220fe6e52abda2a3f03d74c722b495041206d49"} Jan 28 10:01:23 crc kubenswrapper[4735]: E0128 10:01:23.784961 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 28 10:01:23 crc kubenswrapper[4735]: E0128 10:01:23.785460 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86jk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-khsht_openstack(1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 10:01:23 crc kubenswrapper[4735]: E0128 10:01:23.786887 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-khsht" 
podUID="1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" Jan 28 10:01:23 crc kubenswrapper[4735]: I0128 10:01:23.959857 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:01:23 crc kubenswrapper[4735]: I0128 10:01:23.961313 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:01:23 crc kubenswrapper[4735]: I0128 10:01:23.976938 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:01:23 crc kubenswrapper[4735]: I0128 10:01:23.996435 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:01:23 crc kubenswrapper[4735]: I0128 10:01:23.996837 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.033857 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372d90e2-29ec-4a20-9a36-8073b54ea7b0-logs\") pod \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.033899 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ea7d825-2458-4f96-b652-f51e66d6f0cd-horizon-secret-key\") pod \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.033940 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ea7d825-2458-4f96-b652-f51e66d6f0cd-logs\") pod \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\" (UID: 
\"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.033956 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-config-data\") pod \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.034012 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/372d90e2-29ec-4a20-9a36-8073b54ea7b0-horizon-secret-key\") pod \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.034027 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-config-data\") pod \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.034061 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6b5j\" (UniqueName: \"kubernetes.io/projected/372d90e2-29ec-4a20-9a36-8073b54ea7b0-kube-api-access-d6b5j\") pod \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.034153 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbgvd\" (UniqueName: \"kubernetes.io/projected/7ea7d825-2458-4f96-b652-f51e66d6f0cd-kube-api-access-zbgvd\") pod \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.034187 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-scripts\") pod \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\" (UID: \"372d90e2-29ec-4a20-9a36-8073b54ea7b0\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.034249 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-scripts\") pod \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\" (UID: \"7ea7d825-2458-4f96-b652-f51e66d6f0cd\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.035691 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-scripts" (OuterVolumeSpecName: "scripts") pod "7ea7d825-2458-4f96-b652-f51e66d6f0cd" (UID: "7ea7d825-2458-4f96-b652-f51e66d6f0cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.035973 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372d90e2-29ec-4a20-9a36-8073b54ea7b0-logs" (OuterVolumeSpecName: "logs") pod "372d90e2-29ec-4a20-9a36-8073b54ea7b0" (UID: "372d90e2-29ec-4a20-9a36-8073b54ea7b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.044045 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-config-data" (OuterVolumeSpecName: "config-data") pod "372d90e2-29ec-4a20-9a36-8073b54ea7b0" (UID: "372d90e2-29ec-4a20-9a36-8073b54ea7b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.044545 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ea7d825-2458-4f96-b652-f51e66d6f0cd-logs" (OuterVolumeSpecName: "logs") pod "7ea7d825-2458-4f96-b652-f51e66d6f0cd" (UID: "7ea7d825-2458-4f96-b652-f51e66d6f0cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.045143 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-config-data" (OuterVolumeSpecName: "config-data") pod "7ea7d825-2458-4f96-b652-f51e66d6f0cd" (UID: "7ea7d825-2458-4f96-b652-f51e66d6f0cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.045560 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea7d825-2458-4f96-b652-f51e66d6f0cd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7ea7d825-2458-4f96-b652-f51e66d6f0cd" (UID: "7ea7d825-2458-4f96-b652-f51e66d6f0cd"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.050766 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372d90e2-29ec-4a20-9a36-8073b54ea7b0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "372d90e2-29ec-4a20-9a36-8073b54ea7b0" (UID: "372d90e2-29ec-4a20-9a36-8073b54ea7b0"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.051190 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-scripts" (OuterVolumeSpecName: "scripts") pod "372d90e2-29ec-4a20-9a36-8073b54ea7b0" (UID: "372d90e2-29ec-4a20-9a36-8073b54ea7b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.051375 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372d90e2-29ec-4a20-9a36-8073b54ea7b0-kube-api-access-d6b5j" (OuterVolumeSpecName: "kube-api-access-d6b5j") pod "372d90e2-29ec-4a20-9a36-8073b54ea7b0" (UID: "372d90e2-29ec-4a20-9a36-8073b54ea7b0"). InnerVolumeSpecName "kube-api-access-d6b5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.058387 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea7d825-2458-4f96-b652-f51e66d6f0cd-kube-api-access-zbgvd" (OuterVolumeSpecName: "kube-api-access-zbgvd") pod "7ea7d825-2458-4f96-b652-f51e66d6f0cd" (UID: "7ea7d825-2458-4f96-b652-f51e66d6f0cd"). InnerVolumeSpecName "kube-api-access-zbgvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136482 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7drc\" (UniqueName: \"kubernetes.io/projected/b8117919-6bc1-449e-b3b5-668a44463190-kube-api-access-p7drc\") pod \"b8117919-6bc1-449e-b3b5-668a44463190\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136546 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-config\") pod \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136684 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nztc\" (UniqueName: \"kubernetes.io/projected/ae47bfd0-190f-4281-849f-d6892f20036e-kube-api-access-8nztc\") pod \"ae47bfd0-190f-4281-849f-d6892f20036e\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136741 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae47bfd0-190f-4281-849f-d6892f20036e-logs\") pod \"ae47bfd0-190f-4281-849f-d6892f20036e\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136823 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-sb\") pod \"b8117919-6bc1-449e-b3b5-668a44463190\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136851 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-config\") pod \"b8117919-6bc1-449e-b3b5-668a44463190\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136877 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-config-data\") pod \"ae47bfd0-190f-4281-849f-d6892f20036e\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136918 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-scripts\") pod \"ae47bfd0-190f-4281-849f-d6892f20036e\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136955 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-swift-storage-0\") pod \"b8117919-6bc1-449e-b3b5-668a44463190\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.136980 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5vtk\" (UniqueName: \"kubernetes.io/projected/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-kube-api-access-w5vtk\") pod \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137022 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-nb\") pod \"b8117919-6bc1-449e-b3b5-668a44463190\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 
10:01:24.137052 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ae47bfd0-190f-4281-849f-d6892f20036e-horizon-secret-key\") pod \"ae47bfd0-190f-4281-849f-d6892f20036e\" (UID: \"ae47bfd0-190f-4281-849f-d6892f20036e\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137076 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-svc\") pod \"b8117919-6bc1-449e-b3b5-668a44463190\" (UID: \"b8117919-6bc1-449e-b3b5-668a44463190\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137117 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-combined-ca-bundle\") pod \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\" (UID: \"c0e8dd26-845d-43ff-bc2d-fd6f4808736f\") " Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137539 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbgvd\" (UniqueName: \"kubernetes.io/projected/7ea7d825-2458-4f96-b652-f51e66d6f0cd-kube-api-access-zbgvd\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137564 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137600 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137613 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/372d90e2-29ec-4a20-9a36-8073b54ea7b0-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137625 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ea7d825-2458-4f96-b652-f51e66d6f0cd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137638 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ea7d825-2458-4f96-b652-f51e66d6f0cd-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137651 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ea7d825-2458-4f96-b652-f51e66d6f0cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137663 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/372d90e2-29ec-4a20-9a36-8073b54ea7b0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137676 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/372d90e2-29ec-4a20-9a36-8073b54ea7b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.137688 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6b5j\" (UniqueName: \"kubernetes.io/projected/372d90e2-29ec-4a20-9a36-8073b54ea7b0-kube-api-access-d6b5j\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.138662 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-config-data" (OuterVolumeSpecName: "config-data") pod "ae47bfd0-190f-4281-849f-d6892f20036e" 
(UID: "ae47bfd0-190f-4281-849f-d6892f20036e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.139688 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-scripts" (OuterVolumeSpecName: "scripts") pod "ae47bfd0-190f-4281-849f-d6892f20036e" (UID: "ae47bfd0-190f-4281-849f-d6892f20036e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.140980 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae47bfd0-190f-4281-849f-d6892f20036e-logs" (OuterVolumeSpecName: "logs") pod "ae47bfd0-190f-4281-849f-d6892f20036e" (UID: "ae47bfd0-190f-4281-849f-d6892f20036e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.149044 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae47bfd0-190f-4281-849f-d6892f20036e-kube-api-access-8nztc" (OuterVolumeSpecName: "kube-api-access-8nztc") pod "ae47bfd0-190f-4281-849f-d6892f20036e" (UID: "ae47bfd0-190f-4281-849f-d6892f20036e"). InnerVolumeSpecName "kube-api-access-8nztc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.157078 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-kube-api-access-w5vtk" (OuterVolumeSpecName: "kube-api-access-w5vtk") pod "c0e8dd26-845d-43ff-bc2d-fd6f4808736f" (UID: "c0e8dd26-845d-43ff-bc2d-fd6f4808736f"). InnerVolumeSpecName "kube-api-access-w5vtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.157510 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8117919-6bc1-449e-b3b5-668a44463190-kube-api-access-p7drc" (OuterVolumeSpecName: "kube-api-access-p7drc") pod "b8117919-6bc1-449e-b3b5-668a44463190" (UID: "b8117919-6bc1-449e-b3b5-668a44463190"). InnerVolumeSpecName "kube-api-access-p7drc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.160981 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae47bfd0-190f-4281-849f-d6892f20036e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ae47bfd0-190f-4281-849f-d6892f20036e" (UID: "ae47bfd0-190f-4281-849f-d6892f20036e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.194356 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0e8dd26-845d-43ff-bc2d-fd6f4808736f" (UID: "c0e8dd26-845d-43ff-bc2d-fd6f4808736f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.205285 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b8117919-6bc1-449e-b3b5-668a44463190" (UID: "b8117919-6bc1-449e-b3b5-668a44463190"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.207191 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-config" (OuterVolumeSpecName: "config") pod "b8117919-6bc1-449e-b3b5-668a44463190" (UID: "b8117919-6bc1-449e-b3b5-668a44463190"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.208273 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-config" (OuterVolumeSpecName: "config") pod "c0e8dd26-845d-43ff-bc2d-fd6f4808736f" (UID: "c0e8dd26-845d-43ff-bc2d-fd6f4808736f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.209362 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b8117919-6bc1-449e-b3b5-668a44463190" (UID: "b8117919-6bc1-449e-b3b5-668a44463190"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.213806 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b8117919-6bc1-449e-b3b5-668a44463190" (UID: "b8117919-6bc1-449e-b3b5-668a44463190"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.218329 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b8117919-6bc1-449e-b3b5-668a44463190" (UID: "b8117919-6bc1-449e-b3b5-668a44463190"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239285 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nztc\" (UniqueName: \"kubernetes.io/projected/ae47bfd0-190f-4281-849f-d6892f20036e-kube-api-access-8nztc\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239336 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae47bfd0-190f-4281-849f-d6892f20036e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239348 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239357 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239383 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239394 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ae47bfd0-190f-4281-849f-d6892f20036e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239402 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239411 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5vtk\" (UniqueName: \"kubernetes.io/projected/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-kube-api-access-w5vtk\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239419 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239426 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ae47bfd0-190f-4281-849f-d6892f20036e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239434 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8117919-6bc1-449e-b3b5-668a44463190-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239441 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239468 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7drc\" (UniqueName: \"kubernetes.io/projected/b8117919-6bc1-449e-b3b5-668a44463190-kube-api-access-p7drc\") on node \"crc\" 
DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.239477 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0e8dd26-845d-43ff-bc2d-fd6f4808736f-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.343883 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.361580 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bdcb89768-499m5"] Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.558899 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" podUID="b8117919-6bc1-449e-b3b5-668a44463190" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.559006 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.585012 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b9fc69d49-fmptv" event={"ID":"7ea7d825-2458-4f96-b652-f51e66d6f0cd","Type":"ContainerDied","Data":"7ece4de6735a79f82b0e497d1e79b5155c93b9ec8ec626c663f01f809e812739"} Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.585136 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b9fc69d49-fmptv" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.595050 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5ghbh" event={"ID":"c0e8dd26-845d-43ff-bc2d-fd6f4808736f","Type":"ContainerDied","Data":"d702f4adf90b079ba293e27d3fef3a968bd57daa1963ba58e74bdd88600bcc9b"} Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.595086 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d702f4adf90b079ba293e27d3fef3a968bd57daa1963ba58e74bdd88600bcc9b" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.595090 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5ghbh" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.596419 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647b785cdf-524zw" event={"ID":"ae47bfd0-190f-4281-849f-d6892f20036e","Type":"ContainerDied","Data":"f38c54975f734a7e7b921239fcc967510cf54c0b38c79dee321c23e32ac471e4"} Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.596473 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-647b785cdf-524zw" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.598615 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce","Type":"ContainerStarted","Data":"4bc7375de96949e93e66a86a1a22fd0454f6eeff3aca4313b315d73a3e84ef25"} Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.600260 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86f45b6977-b8wpp" event={"ID":"372d90e2-29ec-4a20-9a36-8073b54ea7b0","Type":"ContainerDied","Data":"8146e8cfb506d776f4f56c3632da62c866ff2fd0ce791748c0a20ea043fc6e3a"} Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.600262 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86f45b6977-b8wpp" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.601853 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" event={"ID":"b8117919-6bc1-449e-b3b5-668a44463190","Type":"ContainerDied","Data":"b1964c0473d3118073b5013b648fdf4566d7cd494d4214a4ee0013f42ee1c87c"} Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.601888 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-w2t8m" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.601918 4735 scope.go:117] "RemoveContainer" containerID="e125a8a7b38b5cee9070bb72b8256531f85f9205068731bcb8473f3617d1b708" Jan 28 10:01:24 crc kubenswrapper[4735]: E0128 10:01:24.605259 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-khsht" podUID="1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.697016 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b9fc69d49-fmptv"] Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.713065 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b9fc69d49-fmptv"] Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.732569 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86f45b6977-b8wpp"] Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.742998 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-86f45b6977-b8wpp"] Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.763418 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-647b785cdf-524zw"] Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.776468 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-647b785cdf-524zw"] Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.785187 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-w2t8m"] Jan 28 10:01:24 crc kubenswrapper[4735]: I0128 10:01:24.794942 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-w2t8m"] Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.292419 4735 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-bctgx"] Jan 28 10:01:25 crc kubenswrapper[4735]: E0128 10:01:25.292896 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8117919-6bc1-449e-b3b5-668a44463190" containerName="dnsmasq-dns" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.292911 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8117919-6bc1-449e-b3b5-668a44463190" containerName="dnsmasq-dns" Jan 28 10:01:25 crc kubenswrapper[4735]: E0128 10:01:25.292939 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e8dd26-845d-43ff-bc2d-fd6f4808736f" containerName="neutron-db-sync" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.292947 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e8dd26-845d-43ff-bc2d-fd6f4808736f" containerName="neutron-db-sync" Jan 28 10:01:25 crc kubenswrapper[4735]: E0128 10:01:25.292962 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8117919-6bc1-449e-b3b5-668a44463190" containerName="init" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.292969 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8117919-6bc1-449e-b3b5-668a44463190" containerName="init" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.293146 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e8dd26-845d-43ff-bc2d-fd6f4808736f" containerName="neutron-db-sync" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.293171 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8117919-6bc1-449e-b3b5-668a44463190" containerName="dnsmasq-dns" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.294590 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.335831 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-bctgx"] Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.422413 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78b5d6cfb6-dvwt2"] Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.424400 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.430158 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.430180 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.430418 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mglps" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.436091 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.436291 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78b5d6cfb6-dvwt2"] Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.462663 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-config\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.462728 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.462779 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxdb\" (UniqueName: \"kubernetes.io/projected/09a3f6e2-b56d-4a42-9505-fb4b86f60150-kube-api-access-7dxdb\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.462805 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.462859 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.463029 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.514300 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="372d90e2-29ec-4a20-9a36-8073b54ea7b0" path="/var/lib/kubelet/pods/372d90e2-29ec-4a20-9a36-8073b54ea7b0/volumes" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.514807 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea7d825-2458-4f96-b652-f51e66d6f0cd" path="/var/lib/kubelet/pods/7ea7d825-2458-4f96-b652-f51e66d6f0cd/volumes" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.515223 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae47bfd0-190f-4281-849f-d6892f20036e" path="/var/lib/kubelet/pods/ae47bfd0-190f-4281-849f-d6892f20036e/volumes" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.515559 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8117919-6bc1-449e-b3b5-668a44463190" path="/var/lib/kubelet/pods/b8117919-6bc1-449e-b3b5-668a44463190/volumes" Jan 28 10:01:25 crc kubenswrapper[4735]: W0128 10:01:25.562613 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod568b439d_be34_4947_a976_27d1410320f4.slice/crio-df03672497e5ae2f672d25ec81e309f50ab21ec461c57c541f9b2218533e42e5 WatchSource:0}: Error finding container df03672497e5ae2f672d25ec81e309f50ab21ec461c57c541f9b2218533e42e5: Status 404 returned error can't find the container with id df03672497e5ae2f672d25ec81e309f50ab21ec461c57c541f9b2218533e42e5 Jan 28 10:01:25 crc kubenswrapper[4735]: W0128 10:01:25.564870 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod160ae085_44c0_4cef_9854_b0686547b43c.slice/crio-3ca923335ce604c00a3c11a7a63a2f3a3757a0440e8a0e6b68ff721259172bac WatchSource:0}: Error finding container 3ca923335ce604c00a3c11a7a63a2f3a3757a0440e8a0e6b68ff721259172bac: Status 404 returned error can't find the container with id 3ca923335ce604c00a3c11a7a63a2f3a3757a0440e8a0e6b68ff721259172bac Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 
10:01:25.565175 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-config\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565233 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565263 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxdb\" (UniqueName: \"kubernetes.io/projected/09a3f6e2-b56d-4a42-9505-fb4b86f60150-kube-api-access-7dxdb\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565291 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565319 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-httpd-config\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565343 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-combined-ca-bundle\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565360 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-ovndb-tls-certs\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565391 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjwjc\" (UniqueName: \"kubernetes.io/projected/71a6cc2d-71fb-4872-a42f-967a43beb910-kube-api-access-bjwjc\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565409 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565442 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.565468 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-config\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.566360 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-config\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.566704 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.567588 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.568192 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.568256 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.586699 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxdb\" (UniqueName: \"kubernetes.io/projected/09a3f6e2-b56d-4a42-9505-fb4b86f60150-kube-api-access-7dxdb\") pod \"dnsmasq-dns-84b966f6c9-bctgx\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.594668 4735 scope.go:117] "RemoveContainer" containerID="15241c7ef80de965929e4f426b789dda4f040afa4ca756b90fd1d4e5b934b205" Jan 28 10:01:25 crc kubenswrapper[4735]: E0128 10:01:25.600098 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 28 10:01:25 crc kubenswrapper[4735]: E0128 10:01:25.600358 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzljk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-jn5j5_openstack(abbfe5cf-00af-4a6b-921e-401f549a7c8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 10:01:25 crc kubenswrapper[4735]: E0128 10:01:25.601508 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-jn5j5" podUID="abbfe5cf-00af-4a6b-921e-401f549a7c8e" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.613804 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bdcb89768-499m5" event={"ID":"160ae085-44c0-4cef-9854-b0686547b43c","Type":"ContainerStarted","Data":"3ca923335ce604c00a3c11a7a63a2f3a3757a0440e8a0e6b68ff721259172bac"} Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.615128 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"568b439d-be34-4947-a976-27d1410320f4","Type":"ContainerStarted","Data":"df03672497e5ae2f672d25ec81e309f50ab21ec461c57c541f9b2218533e42e5"} Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.616667 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:25 crc kubenswrapper[4735]: E0128 10:01:25.623930 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-jn5j5" podUID="abbfe5cf-00af-4a6b-921e-401f549a7c8e" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.667039 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-httpd-config\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.667323 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-combined-ca-bundle\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.667351 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-ovndb-tls-certs\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.667412 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjwjc\" (UniqueName: \"kubernetes.io/projected/71a6cc2d-71fb-4872-a42f-967a43beb910-kube-api-access-bjwjc\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 
28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.667492 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-config\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.679873 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-ovndb-tls-certs\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.685151 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-config\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.694631 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-combined-ca-bundle\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.701008 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-httpd-config\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.712518 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjwjc\" 
(UniqueName: \"kubernetes.io/projected/71a6cc2d-71fb-4872-a42f-967a43beb910-kube-api-access-bjwjc\") pod \"neutron-78b5d6cfb6-dvwt2\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") " pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:25 crc kubenswrapper[4735]: I0128 10:01:25.766454 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.083981 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2vhll"] Jan 28 10:01:26 crc kubenswrapper[4735]: W0128 10:01:26.118431 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7679ac38_8750_48e8_9f60_0c7214eef680.slice/crio-c0dd9b6a3661194d89625c18b99322ab6504c3ed43484118239e4cf7eb5fb349 WatchSource:0}: Error finding container c0dd9b6a3661194d89625c18b99322ab6504c3ed43484118239e4cf7eb5fb349: Status 404 returned error can't find the container with id c0dd9b6a3661194d89625c18b99322ab6504c3ed43484118239e4cf7eb5fb349 Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.165471 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55bcb7d5f8-xkjl6"] Jan 28 10:01:26 crc kubenswrapper[4735]: W0128 10:01:26.201277 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d466bf5_2b29_47a0_b5f9_25e6f6d7bf79.slice/crio-e323a0ecb2fa94f149f5a9a6db9fc0ee887f70825e747b6c683456ae505164cc WatchSource:0}: Error finding container e323a0ecb2fa94f149f5a9a6db9fc0ee887f70825e747b6c683456ae505164cc: Status 404 returned error can't find the container with id e323a0ecb2fa94f149f5a9a6db9fc0ee887f70825e747b6c683456ae505164cc Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.652160 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v5th8" 
event={"ID":"1891b5eb-afe2-4e76-b3bd-2a141a784a13","Type":"ContainerStarted","Data":"8b4062ed4118495b02a20bbaaf78b310a0147dcb1a89db8ea8b4bdc9bef56afb"} Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.662420 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca","Type":"ContainerStarted","Data":"df5ac638833d768c2a8821cb0ac33ae08bb78d88782b3cb8356a0f5a3f5fa87b"} Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.676541 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2vhll" event={"ID":"7679ac38-8750-48e8-9f60-0c7214eef680","Type":"ContainerStarted","Data":"a23b8db2f1361e0c48bf618e9f5e9d49018d6962c921fd0a24b64339635e1431"} Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.676850 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2vhll" event={"ID":"7679ac38-8750-48e8-9f60-0c7214eef680","Type":"ContainerStarted","Data":"c0dd9b6a3661194d89625c18b99322ab6504c3ed43484118239e4cf7eb5fb349"} Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.679009 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-v5th8" podStartSLOduration=5.32987621 podStartE2EDuration="33.678988771s" podCreationTimestamp="2026-01-28 10:00:53 +0000 UTC" firstStartedPulling="2026-01-28 10:00:55.465991537 +0000 UTC m=+1108.615655910" lastFinishedPulling="2026-01-28 10:01:23.815104098 +0000 UTC m=+1136.964768471" observedRunningTime="2026-01-28 10:01:26.669070562 +0000 UTC m=+1139.818734935" watchObservedRunningTime="2026-01-28 10:01:26.678988771 +0000 UTC m=+1139.828653144" Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.679519 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bdcb89768-499m5" 
event={"ID":"160ae085-44c0-4cef-9854-b0686547b43c","Type":"ContainerStarted","Data":"dbec602550a61cd364029656e02575fa14e978705c0893330895bcbcbd87ac47"} Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.680650 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55bcb7d5f8-xkjl6" event={"ID":"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79","Type":"ContainerStarted","Data":"e323a0ecb2fa94f149f5a9a6db9fc0ee887f70825e747b6c683456ae505164cc"} Jan 28 10:01:26 crc kubenswrapper[4735]: W0128 10:01:26.697726 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71a6cc2d_71fb_4872_a42f_967a43beb910.slice/crio-4c21c2c7b241230cd00f8ba96b69233586a492cff3fcf85d37935c6c876340e6 WatchSource:0}: Error finding container 4c21c2c7b241230cd00f8ba96b69233586a492cff3fcf85d37935c6c876340e6: Status 404 returned error can't find the container with id 4c21c2c7b241230cd00f8ba96b69233586a492cff3fcf85d37935c6c876340e6 Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.703824 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78b5d6cfb6-dvwt2"] Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.712759 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2vhll" podStartSLOduration=22.712717826 podStartE2EDuration="22.712717826s" podCreationTimestamp="2026-01-28 10:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:26.69387899 +0000 UTC m=+1139.843543383" watchObservedRunningTime="2026-01-28 10:01:26.712717826 +0000 UTC m=+1139.862382209" Jan 28 10:01:26 crc kubenswrapper[4735]: I0128 10:01:26.733429 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-bctgx"] Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.719366 4735 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-78b5d6cfb6-dvwt2" event={"ID":"71a6cc2d-71fb-4872-a42f-967a43beb910","Type":"ContainerStarted","Data":"fd69ba488e747c3743dd6b999c99cc985490a9f4db64fe0bd212ba5843dfd4a6"} Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.720077 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b5d6cfb6-dvwt2" event={"ID":"71a6cc2d-71fb-4872-a42f-967a43beb910","Type":"ContainerStarted","Data":"4c21c2c7b241230cd00f8ba96b69233586a492cff3fcf85d37935c6c876340e6"} Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.726912 4735 generic.go:334] "Generic (PLEG): container finished" podID="09a3f6e2-b56d-4a42-9505-fb4b86f60150" containerID="ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3" exitCode=0 Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.726973 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" event={"ID":"09a3f6e2-b56d-4a42-9505-fb4b86f60150","Type":"ContainerDied","Data":"ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3"} Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.726998 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" event={"ID":"09a3f6e2-b56d-4a42-9505-fb4b86f60150","Type":"ContainerStarted","Data":"26c338c6ff9e45d4a31b7cbc224faeb4a785656b22016b5e4e1e5cc286068690"} Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.733515 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"568b439d-be34-4947-a976-27d1410320f4","Type":"ContainerStarted","Data":"bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597"} Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.740868 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce","Type":"ContainerStarted","Data":"89419667241bd57e6322ce1dfe454ded37f7b769a951139e2aa0afd5b47f856b"} Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.768904 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bdcb89768-499m5" event={"ID":"160ae085-44c0-4cef-9854-b0686547b43c","Type":"ContainerStarted","Data":"8367909f7f9f94db3f14f8c070c49e1d38bee064d5e8eeb3fa42c652c31be1f9"} Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.776247 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55bcb7d5f8-xkjl6" event={"ID":"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79","Type":"ContainerStarted","Data":"627f0176c1c41c1eb2c0d1dc5ab2749416941b8ddaa1663298b66c07244cf2c7"} Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.776278 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55bcb7d5f8-xkjl6" event={"ID":"9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79","Type":"ContainerStarted","Data":"e3858b35706a43bcfd7d44c91afad8ec250c863a9803e8240d5b325eea97c177"} Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.803906 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bdcb89768-499m5" podStartSLOduration=25.119721212 podStartE2EDuration="25.803887015s" podCreationTimestamp="2026-01-28 10:01:02 +0000 UTC" firstStartedPulling="2026-01-28 10:01:25.58002175 +0000 UTC m=+1138.729686133" lastFinishedPulling="2026-01-28 10:01:26.264187563 +0000 UTC m=+1139.413851936" observedRunningTime="2026-01-28 10:01:27.803598569 +0000 UTC m=+1140.953262942" watchObservedRunningTime="2026-01-28 10:01:27.803887015 +0000 UTC m=+1140.953551388" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.852517 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-55bcb7d5f8-xkjl6" podStartSLOduration=25.369147727 podStartE2EDuration="25.85249897s" podCreationTimestamp="2026-01-28 10:01:02 +0000 
UTC" firstStartedPulling="2026-01-28 10:01:26.20524277 +0000 UTC m=+1139.354907153" lastFinishedPulling="2026-01-28 10:01:26.688594023 +0000 UTC m=+1139.838258396" observedRunningTime="2026-01-28 10:01:27.848313607 +0000 UTC m=+1140.997977980" watchObservedRunningTime="2026-01-28 10:01:27.85249897 +0000 UTC m=+1141.002163343" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.884866 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b87b5c6d9-94mvs"] Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.886938 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.891460 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.901428 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.923472 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b87b5c6d9-94mvs"] Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.956423 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-config\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.956471 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-internal-tls-certs\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 
10:01:27.956492 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-combined-ca-bundle\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.956520 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-ovndb-tls-certs\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.956563 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhszw\" (UniqueName: \"kubernetes.io/projected/58725e47-7476-44f5-ad7b-ccd74d87aace-kube-api-access-zhszw\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.956722 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-httpd-config\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:27 crc kubenswrapper[4735]: I0128 10:01:27.956779 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-public-tls-certs\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: 
I0128 10:01:28.061955 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-config\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.062215 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-internal-tls-certs\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.062241 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-combined-ca-bundle\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.062264 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-ovndb-tls-certs\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.062309 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhszw\" (UniqueName: \"kubernetes.io/projected/58725e47-7476-44f5-ad7b-ccd74d87aace-kube-api-access-zhszw\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.062343 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-httpd-config\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.062370 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-public-tls-certs\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.066813 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-config\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.067609 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-public-tls-certs\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.067895 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-httpd-config\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.069348 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-internal-tls-certs\") pod \"neutron-b87b5c6d9-94mvs\" (UID: 
\"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.070003 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-combined-ca-bundle\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.076347 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-ovndb-tls-certs\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.082369 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhszw\" (UniqueName: \"kubernetes.io/projected/58725e47-7476-44f5-ad7b-ccd74d87aace-kube-api-access-zhszw\") pod \"neutron-b87b5c6d9-94mvs\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.234942 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.786334 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" event={"ID":"09a3f6e2-b56d-4a42-9505-fb4b86f60150","Type":"ContainerStarted","Data":"cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30"} Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.786658 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.788223 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"568b439d-be34-4947-a976-27d1410320f4","Type":"ContainerStarted","Data":"b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad"} Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.788392 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="568b439d-be34-4947-a976-27d1410320f4" containerName="glance-log" containerID="cri-o://bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597" gracePeriod=30 Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.788664 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="568b439d-be34-4947-a976-27d1410320f4" containerName="glance-httpd" containerID="cri-o://b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad" gracePeriod=30 Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.792980 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce","Type":"ContainerStarted","Data":"0e106a0e91c85b873e390b7e280e4be5b4d267b03903cd80e1d06e8f51373510"} Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.793094 4735 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerName="glance-log" containerID="cri-o://89419667241bd57e6322ce1dfe454ded37f7b769a951139e2aa0afd5b47f856b" gracePeriod=30 Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.793230 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerName="glance-httpd" containerID="cri-o://0e106a0e91c85b873e390b7e280e4be5b4d267b03903cd80e1d06e8f51373510" gracePeriod=30 Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.801673 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b5d6cfb6-dvwt2" event={"ID":"71a6cc2d-71fb-4872-a42f-967a43beb910","Type":"ContainerStarted","Data":"14421055e39f0220830ac82638f32ebf9c7f8520cc4ffcdee3c4177c88b3d29a"} Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.802924 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.834033 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" podStartSLOduration=3.833993264 podStartE2EDuration="3.833993264s" podCreationTimestamp="2026-01-28 10:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:28.808732276 +0000 UTC m=+1141.958396649" watchObservedRunningTime="2026-01-28 10:01:28.833993264 +0000 UTC m=+1141.983657637" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.845441 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78b5d6cfb6-dvwt2" podStartSLOduration=3.845417077 podStartE2EDuration="3.845417077s" podCreationTimestamp="2026-01-28 10:01:25 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:28.831459678 +0000 UTC m=+1141.981124061" watchObservedRunningTime="2026-01-28 10:01:28.845417077 +0000 UTC m=+1141.995081460" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.863228 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=29.86320709 podStartE2EDuration="29.86320709s" podCreationTimestamp="2026-01-28 10:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:28.856701096 +0000 UTC m=+1142.006365469" watchObservedRunningTime="2026-01-28 10:01:28.86320709 +0000 UTC m=+1142.012871463" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.887088 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.887067727 podStartE2EDuration="29.887067727s" podCreationTimestamp="2026-01-28 10:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:28.886592167 +0000 UTC m=+1142.036256560" watchObservedRunningTime="2026-01-28 10:01:28.887067727 +0000 UTC m=+1142.036732100" Jan 28 10:01:28 crc kubenswrapper[4735]: I0128 10:01:28.954691 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b87b5c6d9-94mvs"] Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.723536 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.818571 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b87b5c6d9-94mvs" event={"ID":"58725e47-7476-44f5-ad7b-ccd74d87aace","Type":"ContainerStarted","Data":"84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613"} Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.818619 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b87b5c6d9-94mvs" event={"ID":"58725e47-7476-44f5-ad7b-ccd74d87aace","Type":"ContainerStarted","Data":"0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916"} Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.818633 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b87b5c6d9-94mvs" event={"ID":"58725e47-7476-44f5-ad7b-ccd74d87aace","Type":"ContainerStarted","Data":"6dac877d29445cdc0afee428c767abcbffeccdf8bb1a08eb8a713fa933631116"} Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.819991 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.834176 4735 generic.go:334] "Generic (PLEG): container finished" podID="1891b5eb-afe2-4e76-b3bd-2a141a784a13" containerID="8b4062ed4118495b02a20bbaaf78b310a0147dcb1a89db8ea8b4bdc9bef56afb" exitCode=0 Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.834238 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v5th8" event={"ID":"1891b5eb-afe2-4e76-b3bd-2a141a784a13","Type":"ContainerDied","Data":"8b4062ed4118495b02a20bbaaf78b310a0147dcb1a89db8ea8b4bdc9bef56afb"} Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.847924 4735 generic.go:334] "Generic (PLEG): container finished" podID="568b439d-be34-4947-a976-27d1410320f4" containerID="b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad" exitCode=0 Jan 28 
10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.847954 4735 generic.go:334] "Generic (PLEG): container finished" podID="568b439d-be34-4947-a976-27d1410320f4" containerID="bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597" exitCode=143 Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.847992 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"568b439d-be34-4947-a976-27d1410320f4","Type":"ContainerDied","Data":"b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad"} Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.848016 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"568b439d-be34-4947-a976-27d1410320f4","Type":"ContainerDied","Data":"bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597"} Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.848027 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"568b439d-be34-4947-a976-27d1410320f4","Type":"ContainerDied","Data":"df03672497e5ae2f672d25ec81e309f50ab21ec461c57c541f9b2218533e42e5"} Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.848040 4735 scope.go:117] "RemoveContainer" containerID="b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.848140 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.858117 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b87b5c6d9-94mvs" podStartSLOduration=2.85808907 podStartE2EDuration="2.85808907s" podCreationTimestamp="2026-01-28 10:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:29.834938219 +0000 UTC m=+1142.984602592" watchObservedRunningTime="2026-01-28 10:01:29.85808907 +0000 UTC m=+1143.007753453" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.874505 4735 generic.go:334] "Generic (PLEG): container finished" podID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerID="0e106a0e91c85b873e390b7e280e4be5b4d267b03903cd80e1d06e8f51373510" exitCode=0 Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.874542 4735 generic.go:334] "Generic (PLEG): container finished" podID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerID="89419667241bd57e6322ce1dfe454ded37f7b769a951139e2aa0afd5b47f856b" exitCode=143 Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.874697 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce","Type":"ContainerDied","Data":"0e106a0e91c85b873e390b7e280e4be5b4d267b03903cd80e1d06e8f51373510"} Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.874801 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce","Type":"ContainerDied","Data":"89419667241bd57e6322ce1dfe454ded37f7b769a951139e2aa0afd5b47f856b"} Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.900132 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-scripts\") pod \"568b439d-be34-4947-a976-27d1410320f4\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.900236 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-config-data\") pod \"568b439d-be34-4947-a976-27d1410320f4\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.900316 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-combined-ca-bundle\") pod \"568b439d-be34-4947-a976-27d1410320f4\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.900368 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"568b439d-be34-4947-a976-27d1410320f4\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.900437 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-logs\") pod \"568b439d-be34-4947-a976-27d1410320f4\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.900484 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjnfh\" (UniqueName: \"kubernetes.io/projected/568b439d-be34-4947-a976-27d1410320f4-kube-api-access-sjnfh\") pod \"568b439d-be34-4947-a976-27d1410320f4\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.900546 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-public-tls-certs\") pod \"568b439d-be34-4947-a976-27d1410320f4\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.900580 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-httpd-run\") pod \"568b439d-be34-4947-a976-27d1410320f4\" (UID: \"568b439d-be34-4947-a976-27d1410320f4\") " Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.902204 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-logs" (OuterVolumeSpecName: "logs") pod "568b439d-be34-4947-a976-27d1410320f4" (UID: "568b439d-be34-4947-a976-27d1410320f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.902792 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "568b439d-be34-4947-a976-27d1410320f4" (UID: "568b439d-be34-4947-a976-27d1410320f4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.907024 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "568b439d-be34-4947-a976-27d1410320f4" (UID: "568b439d-be34-4947-a976-27d1410320f4"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.909971 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568b439d-be34-4947-a976-27d1410320f4-kube-api-access-sjnfh" (OuterVolumeSpecName: "kube-api-access-sjnfh") pod "568b439d-be34-4947-a976-27d1410320f4" (UID: "568b439d-be34-4947-a976-27d1410320f4"). InnerVolumeSpecName "kube-api-access-sjnfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.912395 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-scripts" (OuterVolumeSpecName: "scripts") pod "568b439d-be34-4947-a976-27d1410320f4" (UID: "568b439d-be34-4947-a976-27d1410320f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.927766 4735 scope.go:117] "RemoveContainer" containerID="bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.930017 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "568b439d-be34-4947-a976-27d1410320f4" (UID: "568b439d-be34-4947-a976-27d1410320f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.953919 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-config-data" (OuterVolumeSpecName: "config-data") pod "568b439d-be34-4947-a976-27d1410320f4" (UID: "568b439d-be34-4947-a976-27d1410320f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.988352 4735 scope.go:117] "RemoveContainer" containerID="b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad" Jan 28 10:01:29 crc kubenswrapper[4735]: E0128 10:01:29.992898 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad\": container with ID starting with b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad not found: ID does not exist" containerID="b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.992955 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad"} err="failed to get container status \"b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad\": rpc error: code = NotFound desc = could not find container \"b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad\": container with ID starting with b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad not found: ID does not exist" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.992984 4735 scope.go:117] "RemoveContainer" containerID="bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597" Jan 28 10:01:29 crc kubenswrapper[4735]: E0128 10:01:29.999910 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597\": container with ID starting with bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597 not found: ID does not exist" containerID="bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597" Jan 28 10:01:29 crc kubenswrapper[4735]: I0128 10:01:29.999961 
4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597"} err="failed to get container status \"bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597\": rpc error: code = NotFound desc = could not find container \"bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597\": container with ID starting with bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597 not found: ID does not exist" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:29.999987 4735 scope.go:117] "RemoveContainer" containerID="b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.003065 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad"} err="failed to get container status \"b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad\": rpc error: code = NotFound desc = could not find container \"b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad\": container with ID starting with b19205f6bba4006363894ebd77432367f8e01f1d7e1bd85c38d6906a7564fcad not found: ID does not exist" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.003121 4735 scope.go:117] "RemoveContainer" containerID="bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.006231 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjnfh\" (UniqueName: \"kubernetes.io/projected/568b439d-be34-4947-a976-27d1410320f4-kube-api-access-sjnfh\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.006260 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-httpd-run\") on node 
\"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.006270 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.006279 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.006289 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.006309 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.006318 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568b439d-be34-4947-a976-27d1410320f4-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.013200 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597"} err="failed to get container status \"bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597\": rpc error: code = NotFound desc = could not find container \"bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597\": container with ID starting with bc867e2e6fdd6ec8f37f16db85d95a6087a284953a3290e9d4a63dddc2f50597 not found: ID does not exist" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.019691 4735 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.019759 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.035407 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "568b439d-be34-4947-a976-27d1410320f4" (UID: "568b439d-be34-4947-a976-27d1410320f4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.054260 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.056015 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.108458 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.108736 4735 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/568b439d-be34-4947-a976-27d1410320f4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.185146 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.217066 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-combined-ca-bundle\") pod \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.217142 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glhdl\" (UniqueName: \"kubernetes.io/projected/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-kube-api-access-glhdl\") pod \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.217184 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-httpd-run\") pod \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.217278 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-scripts\") pod \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.217317 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-internal-tls-certs\") pod \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.217346 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-config-data\") pod \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.217399 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-logs\") pod \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.217437 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\" (UID: \"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce\") " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.230621 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.232025 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" (UID: 
"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.234729 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.236139 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-logs" (OuterVolumeSpecName: "logs") pod "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" (UID: "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.268115 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-kube-api-access-glhdl" (OuterVolumeSpecName: "kube-api-access-glhdl") pod "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" (UID: "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce"). InnerVolumeSpecName "kube-api-access-glhdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.268230 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:01:30 crc kubenswrapper[4735]: E0128 10:01:30.270013 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerName="glance-log" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.270065 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerName="glance-log" Jan 28 10:01:30 crc kubenswrapper[4735]: E0128 10:01:30.270137 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568b439d-be34-4947-a976-27d1410320f4" containerName="glance-httpd" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.270161 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="568b439d-be34-4947-a976-27d1410320f4" containerName="glance-httpd" Jan 28 10:01:30 crc kubenswrapper[4735]: E0128 10:01:30.270212 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568b439d-be34-4947-a976-27d1410320f4" containerName="glance-log" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.270223 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="568b439d-be34-4947-a976-27d1410320f4" containerName="glance-log" Jan 28 10:01:30 crc kubenswrapper[4735]: E0128 10:01:30.270244 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerName="glance-httpd" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.270252 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerName="glance-httpd" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.270690 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerName="glance-log" Jan 28 10:01:30 crc 
kubenswrapper[4735]: I0128 10:01:30.270713 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="568b439d-be34-4947-a976-27d1410320f4" containerName="glance-httpd" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.270734 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="568b439d-be34-4947-a976-27d1410320f4" containerName="glance-log" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.270766 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" containerName="glance-httpd" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.273258 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.280066 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-scripts" (OuterVolumeSpecName: "scripts") pod "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" (UID: "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.310202 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.310636 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.313286 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.317981 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" (UID: "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.319850 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" (UID: "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.342934 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glhdl\" (UniqueName: \"kubernetes.io/projected/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-kube-api-access-glhdl\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.342966 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.342975 4735 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.342984 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.343006 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.346378 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-config-data" (OuterVolumeSpecName: "config-data") pod "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" (UID: "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.375354 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" (UID: "97c715ff-b09a-4df2-bd6a-38e2f7a9cfce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.399941 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.443964 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptm8\" (UniqueName: \"kubernetes.io/projected/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-kube-api-access-cptm8\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.444026 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.444076 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 
crc kubenswrapper[4735]: I0128 10:01:30.444093 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.444139 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-logs\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.444160 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-config-data\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.444179 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-scripts\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.444205 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: 
I0128 10:01:30.444259 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.444271 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.444280 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.546613 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.546706 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cptm8\" (UniqueName: \"kubernetes.io/projected/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-kube-api-access-cptm8\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.546757 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.546780 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.546797 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.546834 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-logs\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.546854 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-config-data\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.546872 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-scripts\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.548651 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-logs\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.548703 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.548998 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.566270 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-scripts\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.571997 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.573297 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.580486 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.607839 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cptm8\" (UniqueName: \"kubernetes.io/projected/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-kube-api-access-cptm8\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.628269 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.821055 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.896583 4735 generic.go:334] "Generic (PLEG): container finished" podID="7679ac38-8750-48e8-9f60-0c7214eef680" containerID="a23b8db2f1361e0c48bf618e9f5e9d49018d6962c921fd0a24b64339635e1431" exitCode=0 Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.896646 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2vhll" event={"ID":"7679ac38-8750-48e8-9f60-0c7214eef680","Type":"ContainerDied","Data":"a23b8db2f1361e0c48bf618e9f5e9d49018d6962c921fd0a24b64339635e1431"} Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.921696 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.924948 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97c715ff-b09a-4df2-bd6a-38e2f7a9cfce","Type":"ContainerDied","Data":"4bc7375de96949e93e66a86a1a22fd0454f6eeff3aca4313b315d73a3e84ef25"} Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.925007 4735 scope.go:117] "RemoveContainer" containerID="0e106a0e91c85b873e390b7e280e4be5b4d267b03903cd80e1d06e8f51373510" Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.966003 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:01:30 crc kubenswrapper[4735]: I0128 10:01:30.997539 4735 scope.go:117] "RemoveContainer" containerID="89419667241bd57e6322ce1dfe454ded37f7b769a951139e2aa0afd5b47f856b" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.003386 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.028988 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 
10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.030455 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.036430 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.036625 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.049470 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.157151 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.157216 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-logs\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.157258 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ls6g\" (UniqueName: \"kubernetes.io/projected/76060949-a1d9-4101-9d17-adc2e359004b-kube-api-access-6ls6g\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.157330 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.157366 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.157405 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.157438 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.157466 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.259132 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.259199 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.259246 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.259303 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.259335 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-logs\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.259375 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ls6g\" 
(UniqueName: \"kubernetes.io/projected/76060949-a1d9-4101-9d17-adc2e359004b-kube-api-access-6ls6g\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.259424 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.259478 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.259991 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.260736 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-logs\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.261240 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.265997 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.283671 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.284357 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.291155 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.295839 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ls6g\" (UniqueName: \"kubernetes.io/projected/76060949-a1d9-4101-9d17-adc2e359004b-kube-api-access-6ls6g\") pod \"glance-default-internal-api-0\" (UID: 
\"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.301895 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.405318 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.454397 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.463785 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v5th8" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.547632 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568b439d-be34-4947-a976-27d1410320f4" path="/var/lib/kubelet/pods/568b439d-be34-4947-a976-27d1410320f4/volumes" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.551323 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c715ff-b09a-4df2-bd6a-38e2f7a9cfce" path="/var/lib/kubelet/pods/97c715ff-b09a-4df2-bd6a-38e2f7a9cfce/volumes" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.565715 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-combined-ca-bundle\") pod \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.565764 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/1891b5eb-afe2-4e76-b3bd-2a141a784a13-logs\") pod \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.565802 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-config-data\") pod \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.565829 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-scripts\") pod \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.565851 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9msvz\" (UniqueName: \"kubernetes.io/projected/1891b5eb-afe2-4e76-b3bd-2a141a784a13-kube-api-access-9msvz\") pod \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\" (UID: \"1891b5eb-afe2-4e76-b3bd-2a141a784a13\") " Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.566158 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1891b5eb-afe2-4e76-b3bd-2a141a784a13-logs" (OuterVolumeSpecName: "logs") pod "1891b5eb-afe2-4e76-b3bd-2a141a784a13" (UID: "1891b5eb-afe2-4e76-b3bd-2a141a784a13"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.566532 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1891b5eb-afe2-4e76-b3bd-2a141a784a13-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.575889 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-scripts" (OuterVolumeSpecName: "scripts") pod "1891b5eb-afe2-4e76-b3bd-2a141a784a13" (UID: "1891b5eb-afe2-4e76-b3bd-2a141a784a13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.582011 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1891b5eb-afe2-4e76-b3bd-2a141a784a13-kube-api-access-9msvz" (OuterVolumeSpecName: "kube-api-access-9msvz") pod "1891b5eb-afe2-4e76-b3bd-2a141a784a13" (UID: "1891b5eb-afe2-4e76-b3bd-2a141a784a13"). InnerVolumeSpecName "kube-api-access-9msvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.611007 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-config-data" (OuterVolumeSpecName: "config-data") pod "1891b5eb-afe2-4e76-b3bd-2a141a784a13" (UID: "1891b5eb-afe2-4e76-b3bd-2a141a784a13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.620945 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1891b5eb-afe2-4e76-b3bd-2a141a784a13" (UID: "1891b5eb-afe2-4e76-b3bd-2a141a784a13"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.668615 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.668644 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.668654 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1891b5eb-afe2-4e76-b3bd-2a141a784a13-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.668663 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9msvz\" (UniqueName: \"kubernetes.io/projected/1891b5eb-afe2-4e76-b3bd-2a141a784a13-kube-api-access-9msvz\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.933052 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc","Type":"ContainerStarted","Data":"8bed4ee2559b9ae046d792f0bf2f2a4b8d69aa2f9e66a4e4edc2d377810d94e9"} Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.936123 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v5th8" event={"ID":"1891b5eb-afe2-4e76-b3bd-2a141a784a13","Type":"ContainerDied","Data":"28d588a3be18a35d482c5602b15182d6e91d45058afe6957f5db9b2af7932ec3"} Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.936154 4735 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="28d588a3be18a35d482c5602b15182d6e91d45058afe6957f5db9b2af7932ec3" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.936184 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v5th8" Jan 28 10:01:31 crc kubenswrapper[4735]: I0128 10:01:31.981568 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.087177 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7bbdbbfb48-tj5mr"] Jan 28 10:01:32 crc kubenswrapper[4735]: E0128 10:01:32.087597 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1891b5eb-afe2-4e76-b3bd-2a141a784a13" containerName="placement-db-sync" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.087614 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1891b5eb-afe2-4e76-b3bd-2a141a784a13" containerName="placement-db-sync" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.087829 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1891b5eb-afe2-4e76-b3bd-2a141a784a13" containerName="placement-db-sync" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.088685 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.091466 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.091845 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.092063 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.092315 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.092472 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-szmvb" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.145554 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bbdbbfb48-tj5mr"] Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.177091 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-public-tls-certs\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.177229 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-combined-ca-bundle\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.177303 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-scripts\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.177460 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23c7392e-2cdc-4e9a-9a25-9f867a675d48-logs\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.177550 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-internal-tls-certs\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.177619 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-config-data\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.177660 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnlf\" (UniqueName: \"kubernetes.io/projected/23c7392e-2cdc-4e9a-9a25-9f867a675d48-kube-api-access-5bnlf\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.279170 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-internal-tls-certs\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.279442 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-config-data\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.279488 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bnlf\" (UniqueName: \"kubernetes.io/projected/23c7392e-2cdc-4e9a-9a25-9f867a675d48-kube-api-access-5bnlf\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.279512 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-public-tls-certs\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.279569 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-combined-ca-bundle\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.279592 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-scripts\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.279642 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23c7392e-2cdc-4e9a-9a25-9f867a675d48-logs\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.280070 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23c7392e-2cdc-4e9a-9a25-9f867a675d48-logs\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.285342 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-scripts\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.285877 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-public-tls-certs\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.286112 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-config-data\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: 
\"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.286113 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-combined-ca-bundle\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.297388 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bnlf\" (UniqueName: \"kubernetes.io/projected/23c7392e-2cdc-4e9a-9a25-9f867a675d48-kube-api-access-5bnlf\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.299033 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c7392e-2cdc-4e9a-9a25-9f867a675d48-internal-tls-certs\") pod \"placement-7bbdbbfb48-tj5mr\" (UID: \"23c7392e-2cdc-4e9a-9a25-9f867a675d48\") " pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.396134 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.396206 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.433974 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.509554 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.509613 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:32 crc kubenswrapper[4735]: I0128 10:01:32.960164 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc","Type":"ContainerStarted","Data":"0031570d04af9d6dc91a60348887e2de0a3465aebf61536eac76760caf0fef91"} Jan 28 10:01:35 crc kubenswrapper[4735]: I0128 10:01:35.430374 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:01:35 crc kubenswrapper[4735]: I0128 10:01:35.431134 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:01:35 crc kubenswrapper[4735]: I0128 10:01:35.618983 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:01:35 crc kubenswrapper[4735]: I0128 10:01:35.700159 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-vd94t"] Jan 28 10:01:35 crc kubenswrapper[4735]: I0128 10:01:35.702326 4735 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" podUID="9260c92d-b507-4f85-8b91-6202f3c6e393" containerName="dnsmasq-dns" containerID="cri-o://dcc3d7b52e76dea3c5125ffc0fa92281a13ac24b5aeb0ee0332fdbbabacea92a" gracePeriod=10 Jan 28 10:01:35 crc kubenswrapper[4735]: I0128 10:01:35.986707 4735 generic.go:334] "Generic (PLEG): container finished" podID="9260c92d-b507-4f85-8b91-6202f3c6e393" containerID="dcc3d7b52e76dea3c5125ffc0fa92281a13ac24b5aeb0ee0332fdbbabacea92a" exitCode=0 Jan 28 10:01:35 crc kubenswrapper[4735]: I0128 10:01:35.986828 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" event={"ID":"9260c92d-b507-4f85-8b91-6202f3c6e393","Type":"ContainerDied","Data":"dcc3d7b52e76dea3c5125ffc0fa92281a13ac24b5aeb0ee0332fdbbabacea92a"} Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.049706 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2vhll" event={"ID":"7679ac38-8750-48e8-9f60-0c7214eef680","Type":"ContainerDied","Data":"c0dd9b6a3661194d89625c18b99322ab6504c3ed43484118239e4cf7eb5fb349"} Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.050023 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0dd9b6a3661194d89625c18b99322ab6504c3ed43484118239e4cf7eb5fb349" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.073941 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76060949-a1d9-4101-9d17-adc2e359004b","Type":"ContainerStarted","Data":"76296177fc0cc0dacc10f1ac6e9bd4e8fae8a74a33f4da2c77e0a43210c78ea9"} Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.109563 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.182273 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qwl\" (UniqueName: \"kubernetes.io/projected/7679ac38-8750-48e8-9f60-0c7214eef680-kube-api-access-d6qwl\") pod \"7679ac38-8750-48e8-9f60-0c7214eef680\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.182561 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-combined-ca-bundle\") pod \"7679ac38-8750-48e8-9f60-0c7214eef680\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.182636 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-credential-keys\") pod \"7679ac38-8750-48e8-9f60-0c7214eef680\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.182732 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-fernet-keys\") pod \"7679ac38-8750-48e8-9f60-0c7214eef680\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.182832 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-config-data\") pod \"7679ac38-8750-48e8-9f60-0c7214eef680\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.182890 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-scripts\") pod \"7679ac38-8750-48e8-9f60-0c7214eef680\" (UID: \"7679ac38-8750-48e8-9f60-0c7214eef680\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.194620 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7679ac38-8750-48e8-9f60-0c7214eef680" (UID: "7679ac38-8750-48e8-9f60-0c7214eef680"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.195190 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7679ac38-8750-48e8-9f60-0c7214eef680" (UID: "7679ac38-8750-48e8-9f60-0c7214eef680"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.195307 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-scripts" (OuterVolumeSpecName: "scripts") pod "7679ac38-8750-48e8-9f60-0c7214eef680" (UID: "7679ac38-8750-48e8-9f60-0c7214eef680"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.199056 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7679ac38-8750-48e8-9f60-0c7214eef680-kube-api-access-d6qwl" (OuterVolumeSpecName: "kube-api-access-d6qwl") pod "7679ac38-8750-48e8-9f60-0c7214eef680" (UID: "7679ac38-8750-48e8-9f60-0c7214eef680"). InnerVolumeSpecName "kube-api-access-d6qwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.230894 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7679ac38-8750-48e8-9f60-0c7214eef680" (UID: "7679ac38-8750-48e8-9f60-0c7214eef680"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.252062 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-config-data" (OuterVolumeSpecName: "config-data") pod "7679ac38-8750-48e8-9f60-0c7214eef680" (UID: "7679ac38-8750-48e8-9f60-0c7214eef680"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.283147 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.285606 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.285636 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.285646 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qwl\" (UniqueName: \"kubernetes.io/projected/7679ac38-8750-48e8-9f60-0c7214eef680-kube-api-access-d6qwl\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.285656 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.285667 4735 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.285674 4735 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7679ac38-8750-48e8-9f60-0c7214eef680-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.386901 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-swift-storage-0\") pod \"9260c92d-b507-4f85-8b91-6202f3c6e393\" (UID: 
\"9260c92d-b507-4f85-8b91-6202f3c6e393\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.387230 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4jg6\" (UniqueName: \"kubernetes.io/projected/9260c92d-b507-4f85-8b91-6202f3c6e393-kube-api-access-v4jg6\") pod \"9260c92d-b507-4f85-8b91-6202f3c6e393\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.387345 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-nb\") pod \"9260c92d-b507-4f85-8b91-6202f3c6e393\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.387477 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-svc\") pod \"9260c92d-b507-4f85-8b91-6202f3c6e393\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.387640 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-sb\") pod \"9260c92d-b507-4f85-8b91-6202f3c6e393\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.387715 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-config\") pod \"9260c92d-b507-4f85-8b91-6202f3c6e393\" (UID: \"9260c92d-b507-4f85-8b91-6202f3c6e393\") " Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.396945 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9260c92d-b507-4f85-8b91-6202f3c6e393-kube-api-access-v4jg6" (OuterVolumeSpecName: "kube-api-access-v4jg6") pod "9260c92d-b507-4f85-8b91-6202f3c6e393" (UID: "9260c92d-b507-4f85-8b91-6202f3c6e393"). InnerVolumeSpecName "kube-api-access-v4jg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.448222 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9260c92d-b507-4f85-8b91-6202f3c6e393" (UID: "9260c92d-b507-4f85-8b91-6202f3c6e393"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.469694 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9260c92d-b507-4f85-8b91-6202f3c6e393" (UID: "9260c92d-b507-4f85-8b91-6202f3c6e393"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.477586 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7bbdbbfb48-tj5mr"] Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.491812 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4jg6\" (UniqueName: \"kubernetes.io/projected/9260c92d-b507-4f85-8b91-6202f3c6e393-kube-api-access-v4jg6\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.491835 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.491844 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.499482 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-config" (OuterVolumeSpecName: "config") pod "9260c92d-b507-4f85-8b91-6202f3c6e393" (UID: "9260c92d-b507-4f85-8b91-6202f3c6e393"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.504358 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9260c92d-b507-4f85-8b91-6202f3c6e393" (UID: "9260c92d-b507-4f85-8b91-6202f3c6e393"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.526673 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9260c92d-b507-4f85-8b91-6202f3c6e393" (UID: "9260c92d-b507-4f85-8b91-6202f3c6e393"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.598927 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.599828 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:37 crc kubenswrapper[4735]: I0128 10:01:37.599841 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9260c92d-b507-4f85-8b91-6202f3c6e393-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.107770 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76060949-a1d9-4101-9d17-adc2e359004b","Type":"ContainerStarted","Data":"3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542"} Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.109266 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc","Type":"ContainerStarted","Data":"3a8c84148521a148189f8b6770c920d62c1e943632aa62f730ef618dff1acc36"} Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.114889 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca","Type":"ContainerStarted","Data":"ab418fb85d7785dca6ec8936aba30fb114f6bdfe44bc49ca636f66a95caa6551"} Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.122337 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" event={"ID":"9260c92d-b507-4f85-8b91-6202f3c6e393","Type":"ContainerDied","Data":"43553ec3de921ed3b9b571f768d372dfaf8be08dd19553c3df4c833e18d84608"} Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.122388 4735 scope.go:117] "RemoveContainer" containerID="dcc3d7b52e76dea3c5125ffc0fa92281a13ac24b5aeb0ee0332fdbbabacea92a" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.122505 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-vd94t" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.139447 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2vhll" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.140150 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bbdbbfb48-tj5mr" event={"ID":"23c7392e-2cdc-4e9a-9a25-9f867a675d48","Type":"ContainerStarted","Data":"ffa4899ab5da9eed0a3075676e7d5c000e2a11e542227822b40847735eda4948"} Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.140179 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bbdbbfb48-tj5mr" event={"ID":"23c7392e-2cdc-4e9a-9a25-9f867a675d48","Type":"ContainerStarted","Data":"8755ece8009416e0b707e0df43417190c140eccabe70416e0cad5aa347c53de9"} Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.153286 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.153266735 podStartE2EDuration="8.153266735s" podCreationTimestamp="2026-01-28 10:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:38.149899911 +0000 UTC m=+1151.299564284" watchObservedRunningTime="2026-01-28 10:01:38.153266735 +0000 UTC m=+1151.302931108" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.189336 4735 scope.go:117] "RemoveContainer" containerID="4b8c158fcc25457a26c59cb35aa99fc74ffdbef474a5acc50e8529a5b4ba25dd" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.211859 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-vd94t"] Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.228322 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-vd94t"] Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.245881 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-69fccdbfc9-k2hlg"] Jan 28 10:01:38 crc kubenswrapper[4735]: E0128 10:01:38.246320 4735 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9260c92d-b507-4f85-8b91-6202f3c6e393" containerName="dnsmasq-dns" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.246338 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9260c92d-b507-4f85-8b91-6202f3c6e393" containerName="dnsmasq-dns" Jan 28 10:01:38 crc kubenswrapper[4735]: E0128 10:01:38.246363 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9260c92d-b507-4f85-8b91-6202f3c6e393" containerName="init" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.246370 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="9260c92d-b507-4f85-8b91-6202f3c6e393" containerName="init" Jan 28 10:01:38 crc kubenswrapper[4735]: E0128 10:01:38.246379 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7679ac38-8750-48e8-9f60-0c7214eef680" containerName="keystone-bootstrap" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.246389 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7679ac38-8750-48e8-9f60-0c7214eef680" containerName="keystone-bootstrap" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.246595 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="7679ac38-8750-48e8-9f60-0c7214eef680" containerName="keystone-bootstrap" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.246626 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="9260c92d-b507-4f85-8b91-6202f3c6e393" containerName="dnsmasq-dns" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.251542 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.272997 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.276930 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.278032 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.285842 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pnjpl" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.287150 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.287395 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.321382 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl22p\" (UniqueName: \"kubernetes.io/projected/fe8fc9bd-9646-437c-818b-d34f1b751e19-kube-api-access-sl22p\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.321464 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-config-data\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.321505 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-scripts\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.321580 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-credential-keys\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.321627 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-public-tls-certs\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.321672 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-combined-ca-bundle\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.321735 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-fernet-keys\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.321774 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-internal-tls-certs\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.357979 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69fccdbfc9-k2hlg"] Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.430125 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-credential-keys\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.430206 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-public-tls-certs\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.430251 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-combined-ca-bundle\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.430277 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-fernet-keys\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " 
pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.430301 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-internal-tls-certs\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.430362 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl22p\" (UniqueName: \"kubernetes.io/projected/fe8fc9bd-9646-437c-818b-d34f1b751e19-kube-api-access-sl22p\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.430387 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-config-data\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.430412 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-scripts\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.437668 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-config-data\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.439537 
4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-combined-ca-bundle\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.446547 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-internal-tls-certs\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.449106 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-scripts\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.449228 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-credential-keys\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.449493 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-fernet-keys\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.457858 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fe8fc9bd-9646-437c-818b-d34f1b751e19-public-tls-certs\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.460446 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl22p\" (UniqueName: \"kubernetes.io/projected/fe8fc9bd-9646-437c-818b-d34f1b751e19-kube-api-access-sl22p\") pod \"keystone-69fccdbfc9-k2hlg\" (UID: \"fe8fc9bd-9646-437c-818b-d34f1b751e19\") " pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:38 crc kubenswrapper[4735]: I0128 10:01:38.615454 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:39 crc kubenswrapper[4735]: I0128 10:01:39.146042 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69fccdbfc9-k2hlg"] Jan 28 10:01:39 crc kubenswrapper[4735]: I0128 10:01:39.152984 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7bbdbbfb48-tj5mr" event={"ID":"23c7392e-2cdc-4e9a-9a25-9f867a675d48","Type":"ContainerStarted","Data":"491b7fb63f5556bca37010912081bf35f4265dd6eb54d02cacdaef3705a8d79f"} Jan 28 10:01:39 crc kubenswrapper[4735]: I0128 10:01:39.154735 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:39 crc kubenswrapper[4735]: I0128 10:01:39.154798 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:01:39 crc kubenswrapper[4735]: I0128 10:01:39.159727 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76060949-a1d9-4101-9d17-adc2e359004b","Type":"ContainerStarted","Data":"e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a"} Jan 28 10:01:39 crc kubenswrapper[4735]: I0128 10:01:39.182241 4735 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7bbdbbfb48-tj5mr" podStartSLOduration=7.182221159 podStartE2EDuration="7.182221159s" podCreationTimestamp="2026-01-28 10:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:39.177358911 +0000 UTC m=+1152.327023284" watchObservedRunningTime="2026-01-28 10:01:39.182221159 +0000 UTC m=+1152.331885532" Jan 28 10:01:39 crc kubenswrapper[4735]: W0128 10:01:39.205308 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe8fc9bd_9646_437c_818b_d34f1b751e19.slice/crio-b7e5584bbed3caca98c1a268712bf68c2a5cb46aed71dfb80254769513d113f6 WatchSource:0}: Error finding container b7e5584bbed3caca98c1a268712bf68c2a5cb46aed71dfb80254769513d113f6: Status 404 returned error can't find the container with id b7e5584bbed3caca98c1a268712bf68c2a5cb46aed71dfb80254769513d113f6 Jan 28 10:01:39 crc kubenswrapper[4735]: I0128 10:01:39.221612 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.221588169 podStartE2EDuration="9.221588169s" podCreationTimestamp="2026-01-28 10:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:39.212083609 +0000 UTC m=+1152.361747982" watchObservedRunningTime="2026-01-28 10:01:39.221588169 +0000 UTC m=+1152.371252532" Jan 28 10:01:39 crc kubenswrapper[4735]: I0128 10:01:39.521817 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9260c92d-b507-4f85-8b91-6202f3c6e393" path="/var/lib/kubelet/pods/9260c92d-b507-4f85-8b91-6202f3c6e393/volumes" Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.169671 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-69fccdbfc9-k2hlg" event={"ID":"fe8fc9bd-9646-437c-818b-d34f1b751e19","Type":"ContainerStarted","Data":"2203bf3e2feb516c04e08c3fcdab60f25eefaa53b38c32befe16f31389f10c82"} Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.169731 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.169766 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69fccdbfc9-k2hlg" event={"ID":"fe8fc9bd-9646-437c-818b-d34f1b751e19","Type":"ContainerStarted","Data":"b7e5584bbed3caca98c1a268712bf68c2a5cb46aed71dfb80254769513d113f6"} Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.177479 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-khsht" event={"ID":"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c","Type":"ContainerStarted","Data":"095b46f485573009f0e56308bdf1435113400c91317b028dd131db7a9ab55cf9"} Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.181043 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jn5j5" event={"ID":"abbfe5cf-00af-4a6b-921e-401f549a7c8e","Type":"ContainerStarted","Data":"06d5cf8ada5c991ca25c70487193cb8739e17d60959bd4cddb2596ece2c16006"} Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.201520 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-69fccdbfc9-k2hlg" podStartSLOduration=2.201501919 podStartE2EDuration="2.201501919s" podCreationTimestamp="2026-01-28 10:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:40.197545291 +0000 UTC m=+1153.347209674" watchObservedRunningTime="2026-01-28 10:01:40.201501919 +0000 UTC m=+1153.351166292" Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.233377 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-db-sync-khsht" podStartSLOduration=3.446769856 podStartE2EDuration="47.233356353s" podCreationTimestamp="2026-01-28 10:00:53 +0000 UTC" firstStartedPulling="2026-01-28 10:00:55.241855793 +0000 UTC m=+1108.391520166" lastFinishedPulling="2026-01-28 10:01:39.02844229 +0000 UTC m=+1152.178106663" observedRunningTime="2026-01-28 10:01:40.214719531 +0000 UTC m=+1153.364383914" watchObservedRunningTime="2026-01-28 10:01:40.233356353 +0000 UTC m=+1153.383020726" Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.244811 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-jn5j5" podStartSLOduration=3.282657159 podStartE2EDuration="47.244786046s" podCreationTimestamp="2026-01-28 10:00:53 +0000 UTC" firstStartedPulling="2026-01-28 10:00:55.065905204 +0000 UTC m=+1108.215569577" lastFinishedPulling="2026-01-28 10:01:39.028034081 +0000 UTC m=+1152.177698464" observedRunningTime="2026-01-28 10:01:40.237815972 +0000 UTC m=+1153.387480345" watchObservedRunningTime="2026-01-28 10:01:40.244786046 +0000 UTC m=+1153.394450439" Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.822230 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.822547 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.877325 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 10:01:40 crc kubenswrapper[4735]: I0128 10:01:40.888512 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 10:01:41 crc kubenswrapper[4735]: I0128 10:01:41.189111 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 28 10:01:41 crc kubenswrapper[4735]: I0128 10:01:41.189166 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 10:01:41 crc kubenswrapper[4735]: I0128 10:01:41.406196 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:41 crc kubenswrapper[4735]: I0128 10:01:41.406505 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:41 crc kubenswrapper[4735]: I0128 10:01:41.442002 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:41 crc kubenswrapper[4735]: I0128 10:01:41.455778 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:42 crc kubenswrapper[4735]: I0128 10:01:42.198426 4735 generic.go:334] "Generic (PLEG): container finished" podID="1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" containerID="095b46f485573009f0e56308bdf1435113400c91317b028dd131db7a9ab55cf9" exitCode=0 Jan 28 10:01:42 crc kubenswrapper[4735]: I0128 10:01:42.198568 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-khsht" event={"ID":"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c","Type":"ContainerDied","Data":"095b46f485573009f0e56308bdf1435113400c91317b028dd131db7a9ab55cf9"} Jan 28 10:01:42 crc kubenswrapper[4735]: I0128 10:01:42.198710 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:42 crc kubenswrapper[4735]: I0128 10:01:42.199000 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:42 crc kubenswrapper[4735]: I0128 10:01:42.397395 4735 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-bdcb89768-499m5" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 28 10:01:42 crc kubenswrapper[4735]: I0128 10:01:42.512726 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-55bcb7d5f8-xkjl6" podUID="9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 28 10:01:43 crc kubenswrapper[4735]: I0128 10:01:43.154420 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 10:01:43 crc kubenswrapper[4735]: I0128 10:01:43.155206 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 10:01:44 crc kubenswrapper[4735]: I0128 10:01:44.276595 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:45 crc kubenswrapper[4735]: I0128 10:01:45.153926 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 10:01:46 crc kubenswrapper[4735]: I0128 10:01:46.243607 4735 generic.go:334] "Generic (PLEG): container finished" podID="abbfe5cf-00af-4a6b-921e-401f549a7c8e" containerID="06d5cf8ada5c991ca25c70487193cb8739e17d60959bd4cddb2596ece2c16006" exitCode=0 Jan 28 10:01:46 crc kubenswrapper[4735]: I0128 10:01:46.243645 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jn5j5" event={"ID":"abbfe5cf-00af-4a6b-921e-401f549a7c8e","Type":"ContainerDied","Data":"06d5cf8ada5c991ca25c70487193cb8739e17d60959bd4cddb2596ece2c16006"} Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 
10:01:47.813336 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.909082 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-khsht" Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.918770 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzljk\" (UniqueName: \"kubernetes.io/projected/abbfe5cf-00af-4a6b-921e-401f549a7c8e-kube-api-access-hzljk\") pod \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.918905 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-config-data\") pod \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.918965 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-db-sync-config-data\") pod \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.918981 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbfe5cf-00af-4a6b-921e-401f549a7c8e-etc-machine-id\") pod \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.918997 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-combined-ca-bundle\") pod \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.919099 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-scripts\") pod \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\" (UID: \"abbfe5cf-00af-4a6b-921e-401f549a7c8e\") " Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.919534 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abbfe5cf-00af-4a6b-921e-401f549a7c8e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "abbfe5cf-00af-4a6b-921e-401f549a7c8e" (UID: "abbfe5cf-00af-4a6b-921e-401f549a7c8e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.925575 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-scripts" (OuterVolumeSpecName: "scripts") pod "abbfe5cf-00af-4a6b-921e-401f549a7c8e" (UID: "abbfe5cf-00af-4a6b-921e-401f549a7c8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.936086 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abbfe5cf-00af-4a6b-921e-401f549a7c8e-kube-api-access-hzljk" (OuterVolumeSpecName: "kube-api-access-hzljk") pod "abbfe5cf-00af-4a6b-921e-401f549a7c8e" (UID: "abbfe5cf-00af-4a6b-921e-401f549a7c8e"). InnerVolumeSpecName "kube-api-access-hzljk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.950479 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "abbfe5cf-00af-4a6b-921e-401f549a7c8e" (UID: "abbfe5cf-00af-4a6b-921e-401f549a7c8e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.962807 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abbfe5cf-00af-4a6b-921e-401f549a7c8e" (UID: "abbfe5cf-00af-4a6b-921e-401f549a7c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:47 crc kubenswrapper[4735]: I0128 10:01:47.999239 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-config-data" (OuterVolumeSpecName: "config-data") pod "abbfe5cf-00af-4a6b-921e-401f549a7c8e" (UID: "abbfe5cf-00af-4a6b-921e-401f549a7c8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.020356 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-db-sync-config-data\") pod \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.020427 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-combined-ca-bundle\") pod \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.020489 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86jk2\" (UniqueName: \"kubernetes.io/projected/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-kube-api-access-86jk2\") pod \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\" (UID: \"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c\") " Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.020898 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.020924 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzljk\" (UniqueName: \"kubernetes.io/projected/abbfe5cf-00af-4a6b-921e-401f549a7c8e-kube-api-access-hzljk\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.020938 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.020948 
4735 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.020956 4735 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abbfe5cf-00af-4a6b-921e-401f549a7c8e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.020964 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbfe5cf-00af-4a6b-921e-401f549a7c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.023402 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" (UID: "1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.023853 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-kube-api-access-86jk2" (OuterVolumeSpecName: "kube-api-access-86jk2") pod "1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" (UID: "1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c"). InnerVolumeSpecName "kube-api-access-86jk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.043706 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" (UID: "1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:48 crc kubenswrapper[4735]: E0128 10:01:48.083372 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.122466 4735 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.122664 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.122726 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86jk2\" (UniqueName: \"kubernetes.io/projected/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c-kube-api-access-86jk2\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.261588 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jn5j5" event={"ID":"abbfe5cf-00af-4a6b-921e-401f549a7c8e","Type":"ContainerDied","Data":"4c54851fccbe81ec9c5c415f300c461673de26151f72b3bbeca418b4171b4975"} Jan 28 
10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.261641 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c54851fccbe81ec9c5c415f300c461673de26151f72b3bbeca418b4171b4975" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.261604 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jn5j5" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.264441 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca","Type":"ContainerStarted","Data":"dbad1be2739746b741690eed7ce99b24fc90eb666970b16b6ef2a87fdf6218cb"} Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.264726 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="ceilometer-notification-agent" containerID="cri-o://df5ac638833d768c2a8821cb0ac33ae08bb78d88782b3cb8356a0f5a3f5fa87b" gracePeriod=30 Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.264772 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="proxy-httpd" containerID="cri-o://dbad1be2739746b741690eed7ce99b24fc90eb666970b16b6ef2a87fdf6218cb" gracePeriod=30 Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.264806 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="sg-core" containerID="cri-o://ab418fb85d7785dca6ec8936aba30fb114f6bdfe44bc49ca636f66a95caa6551" gracePeriod=30 Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.267261 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-khsht" 
event={"ID":"1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c","Type":"ContainerDied","Data":"8beede2516c30c42f042e4d5e0ea02425381614899dd962c7f570fd14319c29a"} Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.267287 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8beede2516c30c42f042e4d5e0ea02425381614899dd962c7f570fd14319c29a" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.267369 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-khsht" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.549305 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 10:01:48 crc kubenswrapper[4735]: E0128 10:01:48.550282 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbfe5cf-00af-4a6b-921e-401f549a7c8e" containerName="cinder-db-sync" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.550363 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbfe5cf-00af-4a6b-921e-401f549a7c8e" containerName="cinder-db-sync" Jan 28 10:01:48 crc kubenswrapper[4735]: E0128 10:01:48.550480 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" containerName="barbican-db-sync" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.550549 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" containerName="barbican-db-sync" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.550815 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" containerName="barbican-db-sync" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.550941 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbfe5cf-00af-4a6b-921e-401f549a7c8e" containerName="cinder-db-sync" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.552277 4735 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.557789 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.558024 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.558142 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.558315 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wx4vf" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.566278 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.641147 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-6kkkn"] Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.642522 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.666138 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-6kkkn"] Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736670 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-swift-storage-0\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736718 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-nb\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736794 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736816 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-svc\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736851 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nls7w\" (UniqueName: \"kubernetes.io/projected/75e466dc-86b8-4072-b778-be90de2ad570-kube-api-access-nls7w\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736876 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-scripts\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736896 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-sb\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736931 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736969 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbp6\" (UniqueName: \"kubernetes.io/projected/3735b551-2777-4a92-a82b-d99a12f9dba8-kube-api-access-qfbp6\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.736987 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.737009 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3735b551-2777-4a92-a82b-d99a12f9dba8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.737028 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-config\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.817208 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.824713 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.829284 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.832458 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840173 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbp6\" (UniqueName: \"kubernetes.io/projected/3735b551-2777-4a92-a82b-d99a12f9dba8-kube-api-access-qfbp6\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840222 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840261 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3735b551-2777-4a92-a82b-d99a12f9dba8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840296 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-config\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840356 4735 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-swift-storage-0\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840387 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-nb\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840532 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840568 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-svc\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840602 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nls7w\" (UniqueName: \"kubernetes.io/projected/75e466dc-86b8-4072-b778-be90de2ad570-kube-api-access-nls7w\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840653 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-scripts\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840689 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-sb\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.840788 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.842050 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3735b551-2777-4a92-a82b-d99a12f9dba8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.842257 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-svc\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.842990 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-nb\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: 
\"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.845179 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-swift-storage-0\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.845933 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-config\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.846427 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.849338 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-sb\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.859965 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.865877 
4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.865993 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-scripts\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.870305 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbp6\" (UniqueName: \"kubernetes.io/projected/3735b551-2777-4a92-a82b-d99a12f9dba8-kube-api-access-qfbp6\") pod \"cinder-scheduler-0\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.871612 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nls7w\" (UniqueName: \"kubernetes.io/projected/75e466dc-86b8-4072-b778-be90de2ad570-kube-api-access-nls7w\") pod \"dnsmasq-dns-d68b9cb4c-6kkkn\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.888444 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.942308 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.942388 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5pt6\" (UniqueName: \"kubernetes.io/projected/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-kube-api-access-h5pt6\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.942412 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data-custom\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.942458 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-logs\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.942490 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-scripts\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.942504 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.942516 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:48 crc kubenswrapper[4735]: I0128 10:01:48.967684 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.045944 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.046219 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5pt6\" (UniqueName: \"kubernetes.io/projected/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-kube-api-access-h5pt6\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.046244 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data-custom\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 
10:01:49.046295 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-logs\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.046325 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-scripts\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.046338 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.046354 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.048260 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.054522 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.055138 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data-custom\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.056807 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-logs\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.060217 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-scripts\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.062336 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.077120 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5pt6\" (UniqueName: \"kubernetes.io/projected/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-kube-api-access-h5pt6\") pod \"cinder-api-0\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.238923 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-79b69bb499-htdgz"] Jan 28 10:01:49 crc kubenswrapper[4735]: 
I0128 10:01:49.250715 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.253224 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-j2vph" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.253620 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.253843 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.259309 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-74c4c54ddb-zk4nc"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.261119 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.263078 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.276097 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79b69bb499-htdgz"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.293736 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.311074 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-74c4c54ddb-zk4nc"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.323938 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-6kkkn"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.351224 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-84dcc47884-72t49"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.354007 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.355730 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83266f31-73f1-4b43-94c1-ceeb9ee128b3-config-data\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.355792 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83266f31-73f1-4b43-94c1-ceeb9ee128b3-logs\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.355812 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgxvf\" (UniqueName: \"kubernetes.io/projected/83266f31-73f1-4b43-94c1-ceeb9ee128b3-kube-api-access-pgxvf\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc 
kubenswrapper[4735]: I0128 10:01:49.355856 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83266f31-73f1-4b43-94c1-ceeb9ee128b3-config-data-custom\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.355885 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t47w5\" (UniqueName: \"kubernetes.io/projected/039cbbea-550c-4d71-96b2-6e06fcbc8916-kube-api-access-t47w5\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.355903 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/039cbbea-550c-4d71-96b2-6e06fcbc8916-config-data\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.355921 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039cbbea-550c-4d71-96b2-6e06fcbc8916-combined-ca-bundle\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.355938 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/039cbbea-550c-4d71-96b2-6e06fcbc8916-config-data-custom\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.355966 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/039cbbea-550c-4d71-96b2-6e06fcbc8916-logs\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.356027 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83266f31-73f1-4b43-94c1-ceeb9ee128b3-combined-ca-bundle\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.356340 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.361534 4735 generic.go:334] "Generic (PLEG): container finished" podID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerID="dbad1be2739746b741690eed7ce99b24fc90eb666970b16b6ef2a87fdf6218cb" exitCode=0 Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.361558 4735 generic.go:334] "Generic (PLEG): container finished" podID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerID="ab418fb85d7785dca6ec8936aba30fb114f6bdfe44bc49ca636f66a95caa6551" exitCode=2 Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.361580 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca","Type":"ContainerDied","Data":"dbad1be2739746b741690eed7ce99b24fc90eb666970b16b6ef2a87fdf6218cb"} Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.361604 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca","Type":"ContainerDied","Data":"ab418fb85d7785dca6ec8936aba30fb114f6bdfe44bc49ca636f66a95caa6551"} Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.372226 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84dcc47884-72t49"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.389507 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-95lqd"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.393630 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.412711 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-95lqd"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.457853 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83266f31-73f1-4b43-94c1-ceeb9ee128b3-combined-ca-bundle\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.457926 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.457959 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83266f31-73f1-4b43-94c1-ceeb9ee128b3-config-data\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.457984 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83266f31-73f1-4b43-94c1-ceeb9ee128b3-logs\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458011 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28705939-381c-428c-8b71-1cab75747dc3-logs\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458031 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgxvf\" (UniqueName: \"kubernetes.io/projected/83266f31-73f1-4b43-94c1-ceeb9ee128b3-kube-api-access-pgxvf\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458068 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83266f31-73f1-4b43-94c1-ceeb9ee128b3-config-data-custom\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458096 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t47w5\" (UniqueName: \"kubernetes.io/projected/039cbbea-550c-4d71-96b2-6e06fcbc8916-kube-api-access-t47w5\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458113 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-combined-ca-bundle\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458133 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/039cbbea-550c-4d71-96b2-6e06fcbc8916-config-data\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458149 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wdk6\" (UniqueName: \"kubernetes.io/projected/28705939-381c-428c-8b71-1cab75747dc3-kube-api-access-8wdk6\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458167 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039cbbea-550c-4d71-96b2-6e06fcbc8916-combined-ca-bundle\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " 
pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458185 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/039cbbea-550c-4d71-96b2-6e06fcbc8916-config-data-custom\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458211 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/039cbbea-550c-4d71-96b2-6e06fcbc8916-logs\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.458250 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data-custom\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.460195 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83266f31-73f1-4b43-94c1-ceeb9ee128b3-logs\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.463111 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83266f31-73f1-4b43-94c1-ceeb9ee128b3-config-data-custom\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: 
\"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.463179 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/039cbbea-550c-4d71-96b2-6e06fcbc8916-config-data-custom\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.463367 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/039cbbea-550c-4d71-96b2-6e06fcbc8916-logs\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.463512 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/039cbbea-550c-4d71-96b2-6e06fcbc8916-config-data\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.470117 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039cbbea-550c-4d71-96b2-6e06fcbc8916-combined-ca-bundle\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.472641 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83266f31-73f1-4b43-94c1-ceeb9ee128b3-combined-ca-bundle\") pod 
\"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.473417 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83266f31-73f1-4b43-94c1-ceeb9ee128b3-config-data\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.474980 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t47w5\" (UniqueName: \"kubernetes.io/projected/039cbbea-550c-4d71-96b2-6e06fcbc8916-kube-api-access-t47w5\") pod \"barbican-keystone-listener-74c4c54ddb-zk4nc\" (UID: \"039cbbea-550c-4d71-96b2-6e06fcbc8916\") " pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.484571 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgxvf\" (UniqueName: \"kubernetes.io/projected/83266f31-73f1-4b43-94c1-ceeb9ee128b3-kube-api-access-pgxvf\") pod \"barbican-worker-79b69bb499-htdgz\" (UID: \"83266f31-73f1-4b43-94c1-ceeb9ee128b3\") " pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: W0128 10:01:49.521850 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3735b551_2777_4a92_a82b_d99a12f9dba8.slice/crio-c68becd57a04e38392607f931b931c19a7ede0572d013b47b1a66c158e4cc7c5 WatchSource:0}: Error finding container c68becd57a04e38392607f931b931c19a7ede0572d013b47b1a66c158e4cc7c5: Status 404 returned error can't find the container with id c68becd57a04e38392607f931b931c19a7ede0572d013b47b1a66c158e4cc7c5 Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.528167 4735 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.560103 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd4cm\" (UniqueName: \"kubernetes.io/projected/312624db-583e-4c95-b802-248457108339-kube-api-access-fd4cm\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.560274 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.560441 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28705939-381c-428c-8b71-1cab75747dc3-logs\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.560658 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-svc\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.560698 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-combined-ca-bundle\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " 
pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.560774 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wdk6\" (UniqueName: \"kubernetes.io/projected/28705939-381c-428c-8b71-1cab75747dc3-kube-api-access-8wdk6\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.560802 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-config\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.561180 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.561316 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.561395 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: 
\"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.561477 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28705939-381c-428c-8b71-1cab75747dc3-logs\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.561481 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data-custom\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.564751 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-combined-ca-bundle\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.565740 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.572407 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data-custom\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" 
Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.577609 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wdk6\" (UniqueName: \"kubernetes.io/projected/28705939-381c-428c-8b71-1cab75747dc3-kube-api-access-8wdk6\") pod \"barbican-api-84dcc47884-72t49\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.585985 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-79b69bb499-htdgz" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.601117 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.663206 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd4cm\" (UniqueName: \"kubernetes.io/projected/312624db-583e-4c95-b802-248457108339-kube-api-access-fd4cm\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.663397 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-svc\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.663426 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-config\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.663451 
4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.663492 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.663512 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.664521 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.667023 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-svc\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.668172 4735 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-config\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.669078 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.669586 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.680926 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd4cm\" (UniqueName: \"kubernetes.io/projected/312624db-583e-4c95-b802-248457108339-kube-api-access-fd4cm\") pod \"dnsmasq-dns-5784cf869f-95lqd\" (UID: \"312624db-583e-4c95-b802-248457108339\") " pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.703084 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.725981 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.769982 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-6kkkn"] Jan 28 10:01:49 crc kubenswrapper[4735]: I0128 10:01:49.897668 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 10:01:49 crc kubenswrapper[4735]: W0128 10:01:49.930464 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e7f70cc_e193_4eff_a4a3_02bf65d1a672.slice/crio-9d2061e04a8316cdf396c8d85141252956d7aae94db04b4157287f5b2de33d74 WatchSource:0}: Error finding container 9d2061e04a8316cdf396c8d85141252956d7aae94db04b4157287f5b2de33d74: Status 404 returned error can't find the container with id 9d2061e04a8316cdf396c8d85141252956d7aae94db04b4157287f5b2de33d74 Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.165553 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-74c4c54ddb-zk4nc"] Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.313974 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84dcc47884-72t49"] Jan 28 10:01:50 crc kubenswrapper[4735]: W0128 10:01:50.326621 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83266f31_73f1_4b43_94c1_ceeb9ee128b3.slice/crio-e710863e161b3871c69192dd991e811ea81c317d955579279a2e17b90e0ab6f3 WatchSource:0}: Error finding container e710863e161b3871c69192dd991e811ea81c317d955579279a2e17b90e0ab6f3: Status 404 returned error can't find the container with id e710863e161b3871c69192dd991e811ea81c317d955579279a2e17b90e0ab6f3 Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.327228 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79b69bb499-htdgz"] Jan 28 10:01:50 crc 
kubenswrapper[4735]: I0128 10:01:50.374104 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dcc47884-72t49" event={"ID":"28705939-381c-428c-8b71-1cab75747dc3","Type":"ContainerStarted","Data":"ba791be2d2e0a20be70d31758af606c2292d4bbd22b37e775f8a94669a61a9c6"} Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.376772 4735 generic.go:334] "Generic (PLEG): container finished" podID="75e466dc-86b8-4072-b778-be90de2ad570" containerID="bd009fa47d228c593ad4c64861684cd104f562c18bfccc40bbfdb3b7b8d3c70c" exitCode=0 Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.376842 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" event={"ID":"75e466dc-86b8-4072-b778-be90de2ad570","Type":"ContainerDied","Data":"bd009fa47d228c593ad4c64861684cd104f562c18bfccc40bbfdb3b7b8d3c70c"} Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.376868 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" event={"ID":"75e466dc-86b8-4072-b778-be90de2ad570","Type":"ContainerStarted","Data":"dddf2936ba59062eb66dfef199cd68a839c0188391662a4835fc998b50881567"} Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.379025 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e7f70cc-e193-4eff-a4a3-02bf65d1a672","Type":"ContainerStarted","Data":"9d2061e04a8316cdf396c8d85141252956d7aae94db04b4157287f5b2de33d74"} Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.380882 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3735b551-2777-4a92-a82b-d99a12f9dba8","Type":"ContainerStarted","Data":"c68becd57a04e38392607f931b931c19a7ede0572d013b47b1a66c158e4cc7c5"} Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.383099 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79b69bb499-htdgz" 
event={"ID":"83266f31-73f1-4b43-94c1-ceeb9ee128b3","Type":"ContainerStarted","Data":"e710863e161b3871c69192dd991e811ea81c317d955579279a2e17b90e0ab6f3"} Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.387077 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" event={"ID":"039cbbea-550c-4d71-96b2-6e06fcbc8916","Type":"ContainerStarted","Data":"11262f95a94a70b78c7c180fc5c804880476b3d215afda5075289b13cda9fcc4"} Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.428922 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-95lqd"] Jan 28 10:01:50 crc kubenswrapper[4735]: W0128 10:01:50.430356 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod312624db_583e_4c95_b802_248457108339.slice/crio-5f9d773895810d8b59da6beb70264e707f8a4b97db0e3e6ca956df432c80e744 WatchSource:0}: Error finding container 5f9d773895810d8b59da6beb70264e707f8a4b97db0e3e6ca956df432c80e744: Status 404 returned error can't find the container with id 5f9d773895810d8b59da6beb70264e707f8a4b97db0e3e6ca956df432c80e744 Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.711303 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 10:01:50 crc kubenswrapper[4735]: I0128 10:01:50.987933 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.036411 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nls7w\" (UniqueName: \"kubernetes.io/projected/75e466dc-86b8-4072-b778-be90de2ad570-kube-api-access-nls7w\") pod \"75e466dc-86b8-4072-b778-be90de2ad570\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.036471 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-nb\") pod \"75e466dc-86b8-4072-b778-be90de2ad570\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.036591 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-config\") pod \"75e466dc-86b8-4072-b778-be90de2ad570\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.036628 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-swift-storage-0\") pod \"75e466dc-86b8-4072-b778-be90de2ad570\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.036705 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-svc\") pod \"75e466dc-86b8-4072-b778-be90de2ad570\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.036761 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-sb\") pod \"75e466dc-86b8-4072-b778-be90de2ad570\" (UID: \"75e466dc-86b8-4072-b778-be90de2ad570\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.065477 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e466dc-86b8-4072-b778-be90de2ad570-kube-api-access-nls7w" (OuterVolumeSpecName: "kube-api-access-nls7w") pod "75e466dc-86b8-4072-b778-be90de2ad570" (UID: "75e466dc-86b8-4072-b778-be90de2ad570"). InnerVolumeSpecName "kube-api-access-nls7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.103471 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-config" (OuterVolumeSpecName: "config") pod "75e466dc-86b8-4072-b778-be90de2ad570" (UID: "75e466dc-86b8-4072-b778-be90de2ad570"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.103607 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75e466dc-86b8-4072-b778-be90de2ad570" (UID: "75e466dc-86b8-4072-b778-be90de2ad570"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.104173 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75e466dc-86b8-4072-b778-be90de2ad570" (UID: "75e466dc-86b8-4072-b778-be90de2ad570"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.112360 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75e466dc-86b8-4072-b778-be90de2ad570" (UID: "75e466dc-86b8-4072-b778-be90de2ad570"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.128308 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75e466dc-86b8-4072-b778-be90de2ad570" (UID: "75e466dc-86b8-4072-b778-be90de2ad570"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.139906 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.139932 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.139943 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.139952 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.139962 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nls7w\" (UniqueName: \"kubernetes.io/projected/75e466dc-86b8-4072-b778-be90de2ad570-kube-api-access-nls7w\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.139970 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75e466dc-86b8-4072-b778-be90de2ad570-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.398690 4735 generic.go:334] "Generic (PLEG): container finished" podID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerID="df5ac638833d768c2a8821cb0ac33ae08bb78d88782b3cb8356a0f5a3f5fa87b" exitCode=0 Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.398878 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca","Type":"ContainerDied","Data":"df5ac638833d768c2a8821cb0ac33ae08bb78d88782b3cb8356a0f5a3f5fa87b"} Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.408109 4735 generic.go:334] "Generic (PLEG): container finished" podID="312624db-583e-4c95-b802-248457108339" containerID="c9d09508a0ab9f918f4f9a216cd9723fb49c0424b2deb1cb6f25856e643b653c" exitCode=0 Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.408213 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" event={"ID":"312624db-583e-4c95-b802-248457108339","Type":"ContainerDied","Data":"c9d09508a0ab9f918f4f9a216cd9723fb49c0424b2deb1cb6f25856e643b653c"} Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.408260 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" 
event={"ID":"312624db-583e-4c95-b802-248457108339","Type":"ContainerStarted","Data":"5f9d773895810d8b59da6beb70264e707f8a4b97db0e3e6ca956df432c80e744"} Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.411822 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dcc47884-72t49" event={"ID":"28705939-381c-428c-8b71-1cab75747dc3","Type":"ContainerStarted","Data":"c3b0479d66f581ff80871df5f49f894e8b2dbbee9103a614b9cbc384aedaa1c7"} Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.411861 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dcc47884-72t49" event={"ID":"28705939-381c-428c-8b71-1cab75747dc3","Type":"ContainerStarted","Data":"99f414f9ffc9be16ac74924d33c4ed5ff0b9de076a5fa941301d456020d03568"} Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.412152 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.412182 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.415654 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" event={"ID":"75e466dc-86b8-4072-b778-be90de2ad570","Type":"ContainerDied","Data":"dddf2936ba59062eb66dfef199cd68a839c0188391662a4835fc998b50881567"} Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.415697 4735 scope.go:117] "RemoveContainer" containerID="bd009fa47d228c593ad4c64861684cd104f562c18bfccc40bbfdb3b7b8d3c70c" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.415846 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d68b9cb4c-6kkkn" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.433684 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e7f70cc-e193-4eff-a4a3-02bf65d1a672","Type":"ContainerStarted","Data":"1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a"} Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.457409 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84dcc47884-72t49" podStartSLOduration=2.457394218 podStartE2EDuration="2.457394218s" podCreationTimestamp="2026-01-28 10:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:51.454415921 +0000 UTC m=+1164.604080314" watchObservedRunningTime="2026-01-28 10:01:51.457394218 +0000 UTC m=+1164.607058591" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.568114 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-6kkkn"] Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.575426 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d68b9cb4c-6kkkn"] Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.608874 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.758946 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-combined-ca-bundle\") pod \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.759006 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-sg-core-conf-yaml\") pod \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.759157 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-scripts\") pod \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.759192 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-run-httpd\") pod \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.759791 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" (UID: "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.766201 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-scripts" (OuterVolumeSpecName: "scripts") pod "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" (UID: "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.767855 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98k45\" (UniqueName: \"kubernetes.io/projected/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-kube-api-access-98k45\") pod \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.768360 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-log-httpd\") pod \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.768446 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-config-data\") pod \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\" (UID: \"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca\") " Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.769288 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" (UID: "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.769635 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.769651 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.769663 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.773285 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-kube-api-access-98k45" (OuterVolumeSpecName: "kube-api-access-98k45") pod "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" (UID: "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca"). InnerVolumeSpecName "kube-api-access-98k45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.804266 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" (UID: "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.851356 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" (UID: "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.874569 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.874600 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.874609 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98k45\" (UniqueName: \"kubernetes.io/projected/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-kube-api-access-98k45\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.884689 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-config-data" (OuterVolumeSpecName: "config-data") pod "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" (UID: "f3242296-d82b-45e7-9d2b-c8f8c63ee2ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:51 crc kubenswrapper[4735]: I0128 10:01:51.975886 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.457557 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3242296-d82b-45e7-9d2b-c8f8c63ee2ca","Type":"ContainerDied","Data":"e184ab7f80096e6cce59b9cebc724c723eb6b6a7116e3635ebea9f1935e1b121"} Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.457883 4735 scope.go:117] "RemoveContainer" containerID="dbad1be2739746b741690eed7ce99b24fc90eb666970b16b6ef2a87fdf6218cb" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.457597 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.459854 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" event={"ID":"312624db-583e-4c95-b802-248457108339","Type":"ContainerStarted","Data":"8ffe5e6da487c465e9f8a6c14646a04b374b6d9a576fc79b03f9315dbaf622ce"} Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.459935 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.464996 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e7f70cc-e193-4eff-a4a3-02bf65d1a672","Type":"ContainerStarted","Data":"30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1"} Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.465039 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerName="cinder-api-log" 
containerID="cri-o://1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a" gracePeriod=30 Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.465102 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.465116 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerName="cinder-api" containerID="cri-o://30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1" gracePeriod=30 Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.485533 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" podStartSLOduration=3.48551525 podStartE2EDuration="3.48551525s" podCreationTimestamp="2026-01-28 10:01:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:52.479469414 +0000 UTC m=+1165.629133777" watchObservedRunningTime="2026-01-28 10:01:52.48551525 +0000 UTC m=+1165.635179623" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.503658 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3735b551-2777-4a92-a82b-d99a12f9dba8","Type":"ContainerStarted","Data":"ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc"} Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.503702 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3735b551-2777-4a92-a82b-d99a12f9dba8","Type":"ContainerStarted","Data":"baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d"} Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.505212 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.50519757 
podStartE2EDuration="4.50519757s" podCreationTimestamp="2026-01-28 10:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:52.502848715 +0000 UTC m=+1165.652513088" watchObservedRunningTime="2026-01-28 10:01:52.50519757 +0000 UTC m=+1165.654861943" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.554394 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.566784 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.577307 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:01:52 crc kubenswrapper[4735]: E0128 10:01:52.577844 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="proxy-httpd" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.577869 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="proxy-httpd" Jan 28 10:01:52 crc kubenswrapper[4735]: E0128 10:01:52.577894 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="sg-core" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.577902 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="sg-core" Jan 28 10:01:52 crc kubenswrapper[4735]: E0128 10:01:52.577928 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e466dc-86b8-4072-b778-be90de2ad570" containerName="init" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.577937 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e466dc-86b8-4072-b778-be90de2ad570" containerName="init" Jan 28 10:01:52 crc kubenswrapper[4735]: E0128 
10:01:52.577948 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="ceilometer-notification-agent" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.577956 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="ceilometer-notification-agent" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.578156 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="proxy-httpd" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.578192 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e466dc-86b8-4072-b778-be90de2ad570" containerName="init" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.578222 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="ceilometer-notification-agent" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.578235 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" containerName="sg-core" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.580095 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.585442 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.586859 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.586981 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-log-httpd\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.587009 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-run-httpd\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.587058 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.587091 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drzk5\" (UniqueName: 
\"kubernetes.io/projected/596f1b95-b8a4-4444-a984-a26a031f595f-kube-api-access-drzk5\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.587114 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-config-data\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.587141 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-scripts\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.588144 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.590691 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.745223541 podStartE2EDuration="4.590678307s" podCreationTimestamp="2026-01-28 10:01:48 +0000 UTC" firstStartedPulling="2026-01-28 10:01:49.523241785 +0000 UTC m=+1162.672906158" lastFinishedPulling="2026-01-28 10:01:50.368696551 +0000 UTC m=+1163.518360924" observedRunningTime="2026-01-28 10:01:52.567200025 +0000 UTC m=+1165.716864398" watchObservedRunningTime="2026-01-28 10:01:52.590678307 +0000 UTC m=+1165.740342680" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.645871 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.688492 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.688603 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-log-httpd\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.688630 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-run-httpd\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.688669 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.688707 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drzk5\" (UniqueName: \"kubernetes.io/projected/596f1b95-b8a4-4444-a984-a26a031f595f-kube-api-access-drzk5\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.688741 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-config-data\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 
10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.688797 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-scripts\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.691236 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-log-httpd\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.691279 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-run-httpd\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.697163 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-scripts\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.697410 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.699611 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-config-data\") pod \"ceilometer-0\" (UID: 
\"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.709958 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drzk5\" (UniqueName: \"kubernetes.io/projected/596f1b95-b8a4-4444-a984-a26a031f595f-kube-api-access-drzk5\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.714372 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.905952 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.909121 4735 scope.go:117] "RemoveContainer" containerID="ab418fb85d7785dca6ec8936aba30fb114f6bdfe44bc49ca636f66a95caa6551" Jan 28 10:01:52 crc kubenswrapper[4735]: I0128 10:01:52.959939 4735 scope.go:117] "RemoveContainer" containerID="df5ac638833d768c2a8821cb0ac33ae08bb78d88782b3cb8356a0f5a3f5fa87b" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.270710 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.410276 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-combined-ca-bundle\") pod \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.410704 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5pt6\" (UniqueName: \"kubernetes.io/projected/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-kube-api-access-h5pt6\") pod \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.410781 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-scripts\") pod \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.410823 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-etc-machine-id\") pod \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.410849 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data-custom\") pod \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.410917 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data\") pod \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.410984 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-logs\") pod \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\" (UID: \"8e7f70cc-e193-4eff-a4a3-02bf65d1a672\") " Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.413092 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8e7f70cc-e193-4eff-a4a3-02bf65d1a672" (UID: "8e7f70cc-e193-4eff-a4a3-02bf65d1a672"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.413397 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-logs" (OuterVolumeSpecName: "logs") pod "8e7f70cc-e193-4eff-a4a3-02bf65d1a672" (UID: "8e7f70cc-e193-4eff-a4a3-02bf65d1a672"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.418275 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-scripts" (OuterVolumeSpecName: "scripts") pod "8e7f70cc-e193-4eff-a4a3-02bf65d1a672" (UID: "8e7f70cc-e193-4eff-a4a3-02bf65d1a672"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.418311 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-kube-api-access-h5pt6" (OuterVolumeSpecName: "kube-api-access-h5pt6") pod "8e7f70cc-e193-4eff-a4a3-02bf65d1a672" (UID: "8e7f70cc-e193-4eff-a4a3-02bf65d1a672"). InnerVolumeSpecName "kube-api-access-h5pt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.418389 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e7f70cc-e193-4eff-a4a3-02bf65d1a672" (UID: "8e7f70cc-e193-4eff-a4a3-02bf65d1a672"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.459106 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e7f70cc-e193-4eff-a4a3-02bf65d1a672" (UID: "8e7f70cc-e193-4eff-a4a3-02bf65d1a672"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.492938 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data" (OuterVolumeSpecName: "config-data") pod "8e7f70cc-e193-4eff-a4a3-02bf65d1a672" (UID: "8e7f70cc-e193-4eff-a4a3-02bf65d1a672"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.494509 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.513906 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5pt6\" (UniqueName: \"kubernetes.io/projected/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-kube-api-access-h5pt6\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.513934 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.513945 4735 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.513953 4735 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.513961 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.513969 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.513979 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e7f70cc-e193-4eff-a4a3-02bf65d1a672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.523977 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e466dc-86b8-4072-b778-be90de2ad570" path="/var/lib/kubelet/pods/75e466dc-86b8-4072-b778-be90de2ad570/volumes" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.524552 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3242296-d82b-45e7-9d2b-c8f8c63ee2ca" path="/var/lib/kubelet/pods/f3242296-d82b-45e7-9d2b-c8f8c63ee2ca/volumes" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.544101 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79b69bb499-htdgz" event={"ID":"83266f31-73f1-4b43-94c1-ceeb9ee128b3","Type":"ContainerStarted","Data":"55807c342eb33b2c55abe06586a77b7b6674e82f40db514eeb5562b92543ebbb"} Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.544164 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79b69bb499-htdgz" event={"ID":"83266f31-73f1-4b43-94c1-ceeb9ee128b3","Type":"ContainerStarted","Data":"493564ff0a188edfb24236ee00214a63563caa7a917d901a6ef999233358f989"} Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.550134 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerStarted","Data":"e4a879d51bc6d8247dd499442cdd902b604dae97dbbecb3d464313eabffce65e"} Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.554884 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" event={"ID":"039cbbea-550c-4d71-96b2-6e06fcbc8916","Type":"ContainerStarted","Data":"d8acba57d2da248712a6fcc25f674e527ec1a3d6b928eeea0abff7ccaeae3558"} Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.554938 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" event={"ID":"039cbbea-550c-4d71-96b2-6e06fcbc8916","Type":"ContainerStarted","Data":"b1acb786eb09a72f1d0b0fbfe0a78e2f460d73e673f5b384f1b9a68f9c97565e"} Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.560669 4735 generic.go:334] "Generic (PLEG): container finished" podID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerID="30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1" exitCode=0 Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.560707 4735 generic.go:334] "Generic (PLEG): container finished" podID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerID="1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a" exitCode=143 Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.560869 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e7f70cc-e193-4eff-a4a3-02bf65d1a672","Type":"ContainerDied","Data":"30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1"} Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.560943 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e7f70cc-e193-4eff-a4a3-02bf65d1a672","Type":"ContainerDied","Data":"1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a"} Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.560961 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e7f70cc-e193-4eff-a4a3-02bf65d1a672","Type":"ContainerDied","Data":"9d2061e04a8316cdf396c8d85141252956d7aae94db04b4157287f5b2de33d74"} Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.560983 4735 scope.go:117] "RemoveContainer" containerID="30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.561189 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.574279 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-79b69bb499-htdgz" podStartSLOduration=2.021282252 podStartE2EDuration="4.574256212s" podCreationTimestamp="2026-01-28 10:01:49 +0000 UTC" firstStartedPulling="2026-01-28 10:01:50.359629921 +0000 UTC m=+1163.509294284" lastFinishedPulling="2026-01-28 10:01:52.912603871 +0000 UTC m=+1166.062268244" observedRunningTime="2026-01-28 10:01:53.566642504 +0000 UTC m=+1166.716306867" watchObservedRunningTime="2026-01-28 10:01:53.574256212 +0000 UTC m=+1166.723920585" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.613331 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-74c4c54ddb-zk4nc" podStartSLOduration=2.011416313 podStartE2EDuration="4.613308484s" podCreationTimestamp="2026-01-28 10:01:49 +0000 UTC" firstStartedPulling="2026-01-28 10:01:50.310497156 +0000 UTC m=+1163.460161529" lastFinishedPulling="2026-01-28 10:01:52.912389337 +0000 UTC m=+1166.062053700" observedRunningTime="2026-01-28 10:01:53.601100629 +0000 UTC m=+1166.750765002" watchObservedRunningTime="2026-01-28 10:01:53.613308484 +0000 UTC m=+1166.762972857" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.628007 4735 scope.go:117] "RemoveContainer" containerID="1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.640567 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.673362 4735 scope.go:117] "RemoveContainer" containerID="30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1" Jan 28 10:01:53 crc kubenswrapper[4735]: E0128 10:01:53.673695 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1\": container with ID starting with 30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1 not found: ID does not exist" containerID="30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.673723 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1"} err="failed to get container status \"30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1\": rpc error: code = NotFound desc = could not find container \"30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1\": container with ID starting with 30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1 not found: ID does not exist" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.673746 4735 scope.go:117] "RemoveContainer" containerID="1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a" Jan 28 10:01:53 crc kubenswrapper[4735]: E0128 10:01:53.674223 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a\": container with ID starting with 1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a not found: ID does not exist" containerID="1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.674242 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a"} err="failed to get container status \"1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a\": rpc error: code = NotFound desc = could not find container 
\"1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a\": container with ID starting with 1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a not found: ID does not exist" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.674256 4735 scope.go:117] "RemoveContainer" containerID="30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.674447 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1"} err="failed to get container status \"30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1\": rpc error: code = NotFound desc = could not find container \"30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1\": container with ID starting with 30e91e7a4de81f4ce6bfe0d9090bfd7611221c8ed2d89bfe5c735dff4596dfe1 not found: ID does not exist" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.674461 4735 scope.go:117] "RemoveContainer" containerID="1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.674654 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a"} err="failed to get container status \"1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a\": rpc error: code = NotFound desc = could not find container \"1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a\": container with ID starting with 1ce1608d6fe7e4cf7e0cf8534616722585e67e512fe158398fbc364123d6b53a not found: ID does not exist" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.684680 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.695819 4735 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cinder-api-0"] Jan 28 10:01:53 crc kubenswrapper[4735]: E0128 10:01:53.696217 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerName="cinder-api-log" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.696236 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerName="cinder-api-log" Jan 28 10:01:53 crc kubenswrapper[4735]: E0128 10:01:53.696260 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerName="cinder-api" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.696269 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerName="cinder-api" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.696440 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerName="cinder-api-log" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.696456 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" containerName="cinder-api" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.697343 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.698955 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.699119 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.700314 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.701867 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.821055 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-scripts\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.821671 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-config-data\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.821708 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.821854 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3fab6043-5ed8-4885-94a2-f26820e9609c-logs\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.821913 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-config-data-custom\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.821960 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.822008 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fab6043-5ed8-4885-94a2-f26820e9609c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.822036 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.822117 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw54w\" (UniqueName: \"kubernetes.io/projected/3fab6043-5ed8-4885-94a2-f26820e9609c-kube-api-access-qw54w\") pod \"cinder-api-0\" 
(UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.888777 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.923846 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-config-data-custom\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.923912 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.923949 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fab6043-5ed8-4885-94a2-f26820e9609c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.923974 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.924022 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw54w\" (UniqueName: \"kubernetes.io/projected/3fab6043-5ed8-4885-94a2-f26820e9609c-kube-api-access-qw54w\") pod \"cinder-api-0\" (UID: 
\"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.924051 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-scripts\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.924067 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-config-data\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.924085 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.924122 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fab6043-5ed8-4885-94a2-f26820e9609c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.924154 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fab6043-5ed8-4885-94a2-f26820e9609c-logs\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.924632 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3fab6043-5ed8-4885-94a2-f26820e9609c-logs\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.940788 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-scripts\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.941984 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-config-data-custom\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.942381 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.942726 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.943059 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.947553 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw54w\" (UniqueName: \"kubernetes.io/projected/3fab6043-5ed8-4885-94a2-f26820e9609c-kube-api-access-qw54w\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:53 crc kubenswrapper[4735]: I0128 10:01:53.947900 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fab6043-5ed8-4885-94a2-f26820e9609c-config-data\") pod \"cinder-api-0\" (UID: \"3fab6043-5ed8-4885-94a2-f26820e9609c\") " pod="openstack/cinder-api-0" Jan 28 10:01:54 crc kubenswrapper[4735]: I0128 10:01:54.097192 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 10:01:54 crc kubenswrapper[4735]: I0128 10:01:54.593112 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerStarted","Data":"ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f"} Jan 28 10:01:54 crc kubenswrapper[4735]: I0128 10:01:54.623119 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 10:01:54 crc kubenswrapper[4735]: I0128 10:01:54.657641 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:01:54 crc kubenswrapper[4735]: I0128 10:01:54.670008 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.018948 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f67d6d798-h9vrm"] Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.020612 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.025258 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.026126 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.048640 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f67d6d798-h9vrm"] Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.164331 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-public-tls-certs\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.165062 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-internal-tls-certs\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.165100 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2vhl\" (UniqueName: \"kubernetes.io/projected/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-kube-api-access-c2vhl\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.165313 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-combined-ca-bundle\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.165342 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-logs\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.165389 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-config-data\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.165413 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-config-data-custom\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.266846 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-combined-ca-bundle\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.266903 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-logs\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.266956 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-config-data\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.266975 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-config-data-custom\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.267032 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-public-tls-certs\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.267054 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-internal-tls-certs\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.267198 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2vhl\" (UniqueName: 
\"kubernetes.io/projected/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-kube-api-access-c2vhl\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.268016 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-logs\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.279475 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-combined-ca-bundle\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.279435 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-public-tls-certs\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.279490 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-config-data-custom\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.279827 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-internal-tls-certs\") 
pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.283607 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-config-data\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.284951 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2vhl\" (UniqueName: \"kubernetes.io/projected/f38494ad-ec5a-4426-9d2c-6ef3c6ad877b-kube-api-access-c2vhl\") pod \"barbican-api-7f67d6d798-h9vrm\" (UID: \"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b\") " pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.348314 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.537797 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7f70cc-e193-4eff-a4a3-02bf65d1a672" path="/var/lib/kubelet/pods/8e7f70cc-e193-4eff-a4a3-02bf65d1a672/volumes" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.607049 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerStarted","Data":"db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649"} Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.623167 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3fab6043-5ed8-4885-94a2-f26820e9609c","Type":"ContainerStarted","Data":"ceb1e807a0ae0291fe3ff26e57590c45a5ab45b8cccb04a5196974fc1a08f046"} Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.623210 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3fab6043-5ed8-4885-94a2-f26820e9609c","Type":"ContainerStarted","Data":"972cc99b255b536efaf964f217ab8b741f45140cc27f3fdbea1e6fd37b421c79"} Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.787453 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78b5d6cfb6-dvwt2" Jan 28 10:01:55 crc kubenswrapper[4735]: I0128 10:01:55.852725 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f67d6d798-h9vrm"] Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.060050 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b87b5c6d9-94mvs"] Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.060343 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b87b5c6d9-94mvs" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-api" 
containerID="cri-o://0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916" gracePeriod=30 Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.060979 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b87b5c6d9-94mvs" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-httpd" containerID="cri-o://84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613" gracePeriod=30 Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.083024 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b87b5c6d9-94mvs" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": read tcp 10.217.0.2:37064->10.217.0.155:9696: read: connection reset by peer" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.084048 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77978c49d9-227fz"] Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.085583 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.098354 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77978c49d9-227fz"] Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.188857 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-combined-ca-bundle\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.188929 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-internal-tls-certs\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.188953 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-public-tls-certs\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.188979 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz5nl\" (UniqueName: \"kubernetes.io/projected/585b2106-bf58-400c-8036-bd0220288f1f-kube-api-access-sz5nl\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.189015 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-ovndb-tls-certs\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.189042 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-config\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.189060 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-httpd-config\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.302896 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-internal-tls-certs\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.302944 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-public-tls-certs\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.302982 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz5nl\" (UniqueName: 
\"kubernetes.io/projected/585b2106-bf58-400c-8036-bd0220288f1f-kube-api-access-sz5nl\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.303040 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-ovndb-tls-certs\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.303075 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-config\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.303093 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-httpd-config\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.305010 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-combined-ca-bundle\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.308178 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-ovndb-tls-certs\") pod \"neutron-77978c49d9-227fz\" 
(UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.309186 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-internal-tls-certs\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.309833 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-public-tls-certs\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.309905 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-httpd-config\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.315327 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-config\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.315705 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/585b2106-bf58-400c-8036-bd0220288f1f-combined-ca-bundle\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 
10:01:56.322276 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz5nl\" (UniqueName: \"kubernetes.io/projected/585b2106-bf58-400c-8036-bd0220288f1f-kube-api-access-sz5nl\") pod \"neutron-77978c49d9-227fz\" (UID: \"585b2106-bf58-400c-8036-bd0220288f1f\") " pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.520787 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.664037 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f67d6d798-h9vrm" event={"ID":"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b","Type":"ContainerStarted","Data":"7a37d3bd14780e6c12e41b43788eb14f39dae31d18e949bcf5defc160eae0e39"} Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.664378 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f67d6d798-h9vrm" event={"ID":"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b","Type":"ContainerStarted","Data":"66d4097be2c4a1015522fcddc6055686e47733e324d695c72ebfca753e4dbd1a"} Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.664391 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f67d6d798-h9vrm" event={"ID":"f38494ad-ec5a-4426-9d2c-6ef3c6ad877b","Type":"ContainerStarted","Data":"781a2309c1b6605e05ea443ef8f490ee04f83d7b631067f794e030fe6b993bff"} Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.664421 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.664439 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.673902 4735 generic.go:334] "Generic (PLEG): container finished" podID="58725e47-7476-44f5-ad7b-ccd74d87aace" 
containerID="84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613" exitCode=0 Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.673961 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b87b5c6d9-94mvs" event={"ID":"58725e47-7476-44f5-ad7b-ccd74d87aace","Type":"ContainerDied","Data":"84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613"} Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.683050 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f67d6d798-h9vrm" podStartSLOduration=2.68303232 podStartE2EDuration="2.68303232s" podCreationTimestamp="2026-01-28 10:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:56.681517161 +0000 UTC m=+1169.831181534" watchObservedRunningTime="2026-01-28 10:01:56.68303232 +0000 UTC m=+1169.832696693" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.692956 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerStarted","Data":"1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9"} Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.694686 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3fab6043-5ed8-4885-94a2-f26820e9609c","Type":"ContainerStarted","Data":"c9d6ddb65528463313214e07420f0e2de9decd7414e416dedda06b30d45fe73e"} Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.695559 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.720305 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.720283188 podStartE2EDuration="3.720283188s" podCreationTimestamp="2026-01-28 10:01:53 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:56.719089185 +0000 UTC m=+1169.868753558" watchObservedRunningTime="2026-01-28 10:01:56.720283188 +0000 UTC m=+1169.869947561" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.828260 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-55bcb7d5f8-xkjl6" Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.899711 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bdcb89768-499m5"] Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.899943 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bdcb89768-499m5" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon-log" containerID="cri-o://dbec602550a61cd364029656e02575fa14e978705c0893330895bcbcbd87ac47" gracePeriod=30 Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.900346 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bdcb89768-499m5" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon" containerID="cri-o://8367909f7f9f94db3f14f8c070c49e1d38bee064d5e8eeb3fa42c652c31be1f9" gracePeriod=30 Jan 28 10:01:56 crc kubenswrapper[4735]: I0128 10:01:56.907382 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bdcb89768-499m5" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 28 10:01:57 crc kubenswrapper[4735]: I0128 10:01:57.171849 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77978c49d9-227fz"] Jan 28 10:01:57 crc kubenswrapper[4735]: I0128 10:01:57.704401 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77978c49d9-227fz" 
event={"ID":"585b2106-bf58-400c-8036-bd0220288f1f","Type":"ContainerStarted","Data":"2542b9b8a19156c101c75b9ee05b031f7b4038e0c269618c15b9492213ac1a1a"} Jan 28 10:01:57 crc kubenswrapper[4735]: I0128 10:01:57.704821 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77978c49d9-227fz" event={"ID":"585b2106-bf58-400c-8036-bd0220288f1f","Type":"ContainerStarted","Data":"58ad8f95c6a4c5099df8cdd2a4eb7fd84e7ec7eb024959c62742764252f62859"} Jan 28 10:01:57 crc kubenswrapper[4735]: I0128 10:01:57.704840 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77978c49d9-227fz" event={"ID":"585b2106-bf58-400c-8036-bd0220288f1f","Type":"ContainerStarted","Data":"73799e72d3b8c6329c4af417a51da6d2919cf7d0d120cbe205f7bc1d5d3ad784"} Jan 28 10:01:57 crc kubenswrapper[4735]: I0128 10:01:57.737880 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77978c49d9-227fz" podStartSLOduration=1.7378577480000001 podStartE2EDuration="1.737857748s" podCreationTimestamp="2026-01-28 10:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:01:57.729176839 +0000 UTC m=+1170.878841212" watchObservedRunningTime="2026-01-28 10:01:57.737857748 +0000 UTC m=+1170.887522131" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.363349 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.454039 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhszw\" (UniqueName: \"kubernetes.io/projected/58725e47-7476-44f5-ad7b-ccd74d87aace-kube-api-access-zhszw\") pod \"58725e47-7476-44f5-ad7b-ccd74d87aace\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.454154 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-httpd-config\") pod \"58725e47-7476-44f5-ad7b-ccd74d87aace\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.454217 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-combined-ca-bundle\") pod \"58725e47-7476-44f5-ad7b-ccd74d87aace\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.454243 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-internal-tls-certs\") pod \"58725e47-7476-44f5-ad7b-ccd74d87aace\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.454293 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-config\") pod \"58725e47-7476-44f5-ad7b-ccd74d87aace\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.454324 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-public-tls-certs\") pod \"58725e47-7476-44f5-ad7b-ccd74d87aace\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.454348 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-ovndb-tls-certs\") pod \"58725e47-7476-44f5-ad7b-ccd74d87aace\" (UID: \"58725e47-7476-44f5-ad7b-ccd74d87aace\") " Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.462070 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "58725e47-7476-44f5-ad7b-ccd74d87aace" (UID: "58725e47-7476-44f5-ad7b-ccd74d87aace"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.462485 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58725e47-7476-44f5-ad7b-ccd74d87aace-kube-api-access-zhszw" (OuterVolumeSpecName: "kube-api-access-zhszw") pod "58725e47-7476-44f5-ad7b-ccd74d87aace" (UID: "58725e47-7476-44f5-ad7b-ccd74d87aace"). InnerVolumeSpecName "kube-api-access-zhszw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.546045 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58725e47-7476-44f5-ad7b-ccd74d87aace" (UID: "58725e47-7476-44f5-ad7b-ccd74d87aace"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.549370 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "58725e47-7476-44f5-ad7b-ccd74d87aace" (UID: "58725e47-7476-44f5-ad7b-ccd74d87aace"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.557393 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhszw\" (UniqueName: \"kubernetes.io/projected/58725e47-7476-44f5-ad7b-ccd74d87aace-kube-api-access-zhszw\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.557434 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.557445 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.557454 4735 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.559650 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "58725e47-7476-44f5-ad7b-ccd74d87aace" (UID: "58725e47-7476-44f5-ad7b-ccd74d87aace"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.568216 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-config" (OuterVolumeSpecName: "config") pod "58725e47-7476-44f5-ad7b-ccd74d87aace" (UID: "58725e47-7476-44f5-ad7b-ccd74d87aace"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.568685 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "58725e47-7476-44f5-ad7b-ccd74d87aace" (UID: "58725e47-7476-44f5-ad7b-ccd74d87aace"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.658485 4735 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.658520 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.658532 4735 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58725e47-7476-44f5-ad7b-ccd74d87aace-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.719090 4735 generic.go:334] "Generic (PLEG): container finished" podID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerID="0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916" exitCode=0 Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 
10:01:58.719199 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b87b5c6d9-94mvs" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.719209 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b87b5c6d9-94mvs" event={"ID":"58725e47-7476-44f5-ad7b-ccd74d87aace","Type":"ContainerDied","Data":"0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916"} Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.719295 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b87b5c6d9-94mvs" event={"ID":"58725e47-7476-44f5-ad7b-ccd74d87aace","Type":"ContainerDied","Data":"6dac877d29445cdc0afee428c767abcbffeccdf8bb1a08eb8a713fa933631116"} Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.719325 4735 scope.go:117] "RemoveContainer" containerID="84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.725853 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerStarted","Data":"33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a"} Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.725907 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.725926 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.774263 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.517128581 podStartE2EDuration="6.774241279s" podCreationTimestamp="2026-01-28 10:01:52 +0000 UTC" firstStartedPulling="2026-01-28 10:01:53.498474321 +0000 UTC m=+1166.648138694" lastFinishedPulling="2026-01-28 10:01:57.755587019 +0000 UTC 
m=+1170.905251392" observedRunningTime="2026-01-28 10:01:58.756041738 +0000 UTC m=+1171.905706121" watchObservedRunningTime="2026-01-28 10:01:58.774241279 +0000 UTC m=+1171.923905662" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.832129 4735 scope.go:117] "RemoveContainer" containerID="0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.854183 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b87b5c6d9-94mvs"] Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.860914 4735 scope.go:117] "RemoveContainer" containerID="84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.861448 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b87b5c6d9-94mvs"] Jan 28 10:01:58 crc kubenswrapper[4735]: E0128 10:01:58.861530 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613\": container with ID starting with 84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613 not found: ID does not exist" containerID="84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.861570 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613"} err="failed to get container status \"84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613\": rpc error: code = NotFound desc = could not find container \"84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613\": container with ID starting with 84bf958264be2859afd85256b7f45a800d1744fd4d844071ef35aee0af786613 not found: ID does not exist" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.861600 4735 scope.go:117] 
"RemoveContainer" containerID="0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916" Jan 28 10:01:58 crc kubenswrapper[4735]: E0128 10:01:58.861974 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916\": container with ID starting with 0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916 not found: ID does not exist" containerID="0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916" Jan 28 10:01:58 crc kubenswrapper[4735]: I0128 10:01:58.862003 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916"} err="failed to get container status \"0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916\": rpc error: code = NotFound desc = could not find container \"0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916\": container with ID starting with 0a5c80ad9426a0d0a37d5a944e8198d59694c0ac9c16b1f9ffe0548679eff916 not found: ID does not exist" Jan 28 10:01:59 crc kubenswrapper[4735]: I0128 10:01:59.166473 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 10:01:59 crc kubenswrapper[4735]: I0128 10:01:59.249941 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 10:01:59 crc kubenswrapper[4735]: I0128 10:01:59.513161 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" path="/var/lib/kubelet/pods/58725e47-7476-44f5-ad7b-ccd74d87aace/volumes" Jan 28 10:01:59 crc kubenswrapper[4735]: I0128 10:01:59.730894 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:01:59 crc kubenswrapper[4735]: I0128 10:01:59.733623 4735 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerName="cinder-scheduler" containerID="cri-o://baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d" gracePeriod=30 Jan 28 10:01:59 crc kubenswrapper[4735]: I0128 10:01:59.733970 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerName="probe" containerID="cri-o://ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc" gracePeriod=30 Jan 28 10:01:59 crc kubenswrapper[4735]: I0128 10:01:59.800329 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-bctgx"] Jan 28 10:01:59 crc kubenswrapper[4735]: I0128 10:01:59.800597 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" podUID="09a3f6e2-b56d-4a42-9505-fb4b86f60150" containerName="dnsmasq-dns" containerID="cri-o://cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30" gracePeriod=10 Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.213924 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bdcb89768-499m5" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:34180->10.217.0.150:8443: read: connection reset by peer" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.375977 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.396494 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-svc\") pod \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.396593 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-sb\") pod \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.396634 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-config\") pod \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.396808 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-swift-storage-0\") pod \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.396890 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dxdb\" (UniqueName: \"kubernetes.io/projected/09a3f6e2-b56d-4a42-9505-fb4b86f60150-kube-api-access-7dxdb\") pod \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.396923 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-nb\") pod \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\" (UID: \"09a3f6e2-b56d-4a42-9505-fb4b86f60150\") " Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.410178 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a3f6e2-b56d-4a42-9505-fb4b86f60150-kube-api-access-7dxdb" (OuterVolumeSpecName: "kube-api-access-7dxdb") pod "09a3f6e2-b56d-4a42-9505-fb4b86f60150" (UID: "09a3f6e2-b56d-4a42-9505-fb4b86f60150"). InnerVolumeSpecName "kube-api-access-7dxdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.501060 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dxdb\" (UniqueName: \"kubernetes.io/projected/09a3f6e2-b56d-4a42-9505-fb4b86f60150-kube-api-access-7dxdb\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.513215 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09a3f6e2-b56d-4a42-9505-fb4b86f60150" (UID: "09a3f6e2-b56d-4a42-9505-fb4b86f60150"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.536316 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09a3f6e2-b56d-4a42-9505-fb4b86f60150" (UID: "09a3f6e2-b56d-4a42-9505-fb4b86f60150"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.545440 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09a3f6e2-b56d-4a42-9505-fb4b86f60150" (UID: "09a3f6e2-b56d-4a42-9505-fb4b86f60150"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.553162 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "09a3f6e2-b56d-4a42-9505-fb4b86f60150" (UID: "09a3f6e2-b56d-4a42-9505-fb4b86f60150"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.554139 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-config" (OuterVolumeSpecName: "config") pod "09a3f6e2-b56d-4a42-9505-fb4b86f60150" (UID: "09a3f6e2-b56d-4a42-9505-fb4b86f60150"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.603329 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.603358 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.603368 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.603378 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.603386 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09a3f6e2-b56d-4a42-9505-fb4b86f60150-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.754721 4735 generic.go:334] "Generic (PLEG): container finished" podID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerID="ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc" exitCode=0 Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.755117 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3735b551-2777-4a92-a82b-d99a12f9dba8","Type":"ContainerDied","Data":"ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc"} Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.757286 
4735 generic.go:334] "Generic (PLEG): container finished" podID="160ae085-44c0-4cef-9854-b0686547b43c" containerID="8367909f7f9f94db3f14f8c070c49e1d38bee064d5e8eeb3fa42c652c31be1f9" exitCode=0 Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.757363 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bdcb89768-499m5" event={"ID":"160ae085-44c0-4cef-9854-b0686547b43c","Type":"ContainerDied","Data":"8367909f7f9f94db3f14f8c070c49e1d38bee064d5e8eeb3fa42c652c31be1f9"} Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.759332 4735 generic.go:334] "Generic (PLEG): container finished" podID="09a3f6e2-b56d-4a42-9505-fb4b86f60150" containerID="cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30" exitCode=0 Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.759376 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" event={"ID":"09a3f6e2-b56d-4a42-9505-fb4b86f60150","Type":"ContainerDied","Data":"cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30"} Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.759402 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" event={"ID":"09a3f6e2-b56d-4a42-9505-fb4b86f60150","Type":"ContainerDied","Data":"26c338c6ff9e45d4a31b7cbc224faeb4a785656b22016b5e4e1e5cc286068690"} Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.759419 4735 scope.go:117] "RemoveContainer" containerID="cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.759431 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-bctgx" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.779686 4735 scope.go:117] "RemoveContainer" containerID="ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.797957 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-bctgx"] Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.805354 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-bctgx"] Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.813735 4735 scope.go:117] "RemoveContainer" containerID="cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30" Jan 28 10:02:00 crc kubenswrapper[4735]: E0128 10:02:00.814267 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30\": container with ID starting with cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30 not found: ID does not exist" containerID="cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.814298 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30"} err="failed to get container status \"cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30\": rpc error: code = NotFound desc = could not find container \"cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30\": container with ID starting with cbc14182a9ecb3a5c555623ef20d59abfad59a42e52a77f0592d54d89dfa0d30 not found: ID does not exist" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.814317 4735 scope.go:117] "RemoveContainer" containerID="ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3" Jan 28 
10:02:00 crc kubenswrapper[4735]: E0128 10:02:00.814796 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3\": container with ID starting with ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3 not found: ID does not exist" containerID="ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3" Jan 28 10:02:00 crc kubenswrapper[4735]: I0128 10:02:00.814838 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3"} err="failed to get container status \"ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3\": rpc error: code = NotFound desc = could not find container \"ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3\": container with ID starting with ae1504220671102fab0f309c60dbd98aaed7cbaa0100519fba163b4581b2cad3 not found: ID does not exist" Jan 28 10:02:01 crc kubenswrapper[4735]: I0128 10:02:01.249600 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:02:01 crc kubenswrapper[4735]: I0128 10:02:01.401832 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:02:01 crc kubenswrapper[4735]: I0128 10:02:01.513676 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a3f6e2-b56d-4a42-9505-fb4b86f60150" path="/var/lib/kubelet/pods/09a3f6e2-b56d-4a42-9505-fb4b86f60150/volumes" Jan 28 10:02:02 crc kubenswrapper[4735]: I0128 10:02:02.397082 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bdcb89768-499m5" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.462821 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.677582 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3735b551-2777-4a92-a82b-d99a12f9dba8-etc-machine-id\") pod \"3735b551-2777-4a92-a82b-d99a12f9dba8\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.677686 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-scripts\") pod \"3735b551-2777-4a92-a82b-d99a12f9dba8\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.677769 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfbp6\" (UniqueName: \"kubernetes.io/projected/3735b551-2777-4a92-a82b-d99a12f9dba8-kube-api-access-qfbp6\") pod \"3735b551-2777-4a92-a82b-d99a12f9dba8\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.677803 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data-custom\") pod \"3735b551-2777-4a92-a82b-d99a12f9dba8\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.677884 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data\") pod 
\"3735b551-2777-4a92-a82b-d99a12f9dba8\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.677988 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-combined-ca-bundle\") pod \"3735b551-2777-4a92-a82b-d99a12f9dba8\" (UID: \"3735b551-2777-4a92-a82b-d99a12f9dba8\") " Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.680081 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3735b551-2777-4a92-a82b-d99a12f9dba8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3735b551-2777-4a92-a82b-d99a12f9dba8" (UID: "3735b551-2777-4a92-a82b-d99a12f9dba8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.697963 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-scripts" (OuterVolumeSpecName: "scripts") pod "3735b551-2777-4a92-a82b-d99a12f9dba8" (UID: "3735b551-2777-4a92-a82b-d99a12f9dba8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.698014 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3735b551-2777-4a92-a82b-d99a12f9dba8" (UID: "3735b551-2777-4a92-a82b-d99a12f9dba8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.699120 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3735b551-2777-4a92-a82b-d99a12f9dba8-kube-api-access-qfbp6" (OuterVolumeSpecName: "kube-api-access-qfbp6") pod "3735b551-2777-4a92-a82b-d99a12f9dba8" (UID: "3735b551-2777-4a92-a82b-d99a12f9dba8"). InnerVolumeSpecName "kube-api-access-qfbp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.733677 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3735b551-2777-4a92-a82b-d99a12f9dba8" (UID: "3735b551-2777-4a92-a82b-d99a12f9dba8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.779693 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.779722 4735 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3735b551-2777-4a92-a82b-d99a12f9dba8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.779732 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.779741 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfbp6\" (UniqueName: 
\"kubernetes.io/projected/3735b551-2777-4a92-a82b-d99a12f9dba8-kube-api-access-qfbp6\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.779765 4735 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.780656 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data" (OuterVolumeSpecName: "config-data") pod "3735b551-2777-4a92-a82b-d99a12f9dba8" (UID: "3735b551-2777-4a92-a82b-d99a12f9dba8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.800625 4735 generic.go:334] "Generic (PLEG): container finished" podID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerID="baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d" exitCode=0 Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.800670 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3735b551-2777-4a92-a82b-d99a12f9dba8","Type":"ContainerDied","Data":"baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d"} Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.800677 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.800696 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3735b551-2777-4a92-a82b-d99a12f9dba8","Type":"ContainerDied","Data":"c68becd57a04e38392607f931b931c19a7ede0572d013b47b1a66c158e4cc7c5"} Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.800715 4735 scope.go:117] "RemoveContainer" containerID="ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.822738 4735 scope.go:117] "RemoveContainer" containerID="baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.840505 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.850318 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.859686 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 10:02:03 crc kubenswrapper[4735]: E0128 10:02:03.860074 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a3f6e2-b56d-4a42-9505-fb4b86f60150" containerName="dnsmasq-dns" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860095 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a3f6e2-b56d-4a42-9505-fb4b86f60150" containerName="dnsmasq-dns" Jan 28 10:02:03 crc kubenswrapper[4735]: E0128 10:02:03.860111 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-api" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860118 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-api" Jan 28 10:02:03 crc kubenswrapper[4735]: 
E0128 10:02:03.860136 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerName="probe" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860143 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerName="probe" Jan 28 10:02:03 crc kubenswrapper[4735]: E0128 10:02:03.860150 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerName="cinder-scheduler" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860157 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerName="cinder-scheduler" Jan 28 10:02:03 crc kubenswrapper[4735]: E0128 10:02:03.860168 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-httpd" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860174 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-httpd" Jan 28 10:02:03 crc kubenswrapper[4735]: E0128 10:02:03.860183 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a3f6e2-b56d-4a42-9505-fb4b86f60150" containerName="init" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860190 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a3f6e2-b56d-4a42-9505-fb4b86f60150" containerName="init" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860368 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a3f6e2-b56d-4a42-9505-fb4b86f60150" containerName="dnsmasq-dns" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860387 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-httpd" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860395 4735 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerName="probe" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860411 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3735b551-2777-4a92-a82b-d99a12f9dba8" containerName="cinder-scheduler" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.860423 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-api" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.861453 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.869326 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.874454 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.875926 4735 scope.go:117] "RemoveContainer" containerID="ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc" Jan 28 10:02:03 crc kubenswrapper[4735]: E0128 10:02:03.876461 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc\": container with ID starting with ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc not found: ID does not exist" containerID="ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.876519 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc"} err="failed to get container status \"ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc\": rpc error: 
code = NotFound desc = could not find container \"ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc\": container with ID starting with ea5c76d6a48bb86a41f5690dc114c2dd5b5e6d43ab98c8ba5093528b5b67d6dc not found: ID does not exist" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.876552 4735 scope.go:117] "RemoveContainer" containerID="baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d" Jan 28 10:02:03 crc kubenswrapper[4735]: E0128 10:02:03.877187 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d\": container with ID starting with baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d not found: ID does not exist" containerID="baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.877236 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d"} err="failed to get container status \"baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d\": rpc error: code = NotFound desc = could not find container \"baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d\": container with ID starting with baf3c183f2d5de161d5aeadd7978035e5678a17c4953ea99e5e299a3cc054c2d not found: ID does not exist" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.881052 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3735b551-2777-4a92-a82b-d99a12f9dba8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.982525 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f68a4b62-3797-4888-b052-e4da32bc4dab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.982606 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-config-data\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.982642 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.982685 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-scripts\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.983359 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:03 crc kubenswrapper[4735]: I0128 10:02:03.983429 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncmmf\" (UniqueName: 
\"kubernetes.io/projected/f68a4b62-3797-4888-b052-e4da32bc4dab-kube-api-access-ncmmf\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.035354 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.086157 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f68a4b62-3797-4888-b052-e4da32bc4dab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.086212 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-config-data\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.086263 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.086276 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f68a4b62-3797-4888-b052-e4da32bc4dab-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.086317 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-scripts\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.086443 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.086505 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncmmf\" (UniqueName: \"kubernetes.io/projected/f68a4b62-3797-4888-b052-e4da32bc4dab-kube-api-access-ncmmf\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.101409 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-scripts\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.102135 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.106065 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-config-data\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " 
pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.106878 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f68a4b62-3797-4888-b052-e4da32bc4dab-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.109505 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncmmf\" (UniqueName: \"kubernetes.io/projected/f68a4b62-3797-4888-b052-e4da32bc4dab-kube-api-access-ncmmf\") pod \"cinder-scheduler-0\" (UID: \"f68a4b62-3797-4888-b052-e4da32bc4dab\") " pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.157740 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7bbdbbfb48-tj5mr" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.189149 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.755802 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 10:02:04 crc kubenswrapper[4735]: W0128 10:02:04.758813 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf68a4b62_3797_4888_b052_e4da32bc4dab.slice/crio-50ea6ff4441aacab8509dfe3da4d8e2313754938fa6f98f40d10db38b1b3b295 WatchSource:0}: Error finding container 50ea6ff4441aacab8509dfe3da4d8e2313754938fa6f98f40d10db38b1b3b295: Status 404 returned error can't find the container with id 50ea6ff4441aacab8509dfe3da4d8e2313754938fa6f98f40d10db38b1b3b295 Jan 28 10:02:04 crc kubenswrapper[4735]: I0128 10:02:04.811900 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f68a4b62-3797-4888-b052-e4da32bc4dab","Type":"ContainerStarted","Data":"50ea6ff4441aacab8509dfe3da4d8e2313754938fa6f98f40d10db38b1b3b295"} Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.431165 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.431441 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.431484 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.432135 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4a04c510b9b270008568de903a447cbfe28b5ec6b75521aed00c265673e243e"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.432185 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://e4a04c510b9b270008568de903a447cbfe28b5ec6b75521aed00c265673e243e" gracePeriod=600 Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.522195 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3735b551-2777-4a92-a82b-d99a12f9dba8" path="/var/lib/kubelet/pods/3735b551-2777-4a92-a82b-d99a12f9dba8/volumes" Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.838588 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f68a4b62-3797-4888-b052-e4da32bc4dab","Type":"ContainerStarted","Data":"25adb3f5beb82cfff1520ccd5296d6c21b65e20eea7cc1b1ee130db06f70f084"} Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.847941 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="e4a04c510b9b270008568de903a447cbfe28b5ec6b75521aed00c265673e243e" exitCode=0 Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.847988 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" 
event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"e4a04c510b9b270008568de903a447cbfe28b5ec6b75521aed00c265673e243e"} Jan 28 10:02:05 crc kubenswrapper[4735]: I0128 10:02:05.848024 4735 scope.go:117] "RemoveContainer" containerID="426bd6318f22bcaa2e01148fa51d1f918d9825a66fecb193aef53ed2e7aff201" Jan 28 10:02:06 crc kubenswrapper[4735]: I0128 10:02:06.554677 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 28 10:02:06 crc kubenswrapper[4735]: I0128 10:02:06.865739 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"c1c45816a1405ea6213a313d884201ef5ccf1af72f7151ce23dc0ce396519b28"} Jan 28 10:02:06 crc kubenswrapper[4735]: I0128 10:02:06.881914 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f68a4b62-3797-4888-b052-e4da32bc4dab","Type":"ContainerStarted","Data":"e0185a12598be89917217bb5105f3bfd45b137604aee778929fbd6ace7735075"} Jan 28 10:02:06 crc kubenswrapper[4735]: I0128 10:02:06.906922 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.906906551 podStartE2EDuration="3.906906551s" podCreationTimestamp="2026-01-28 10:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:02:06.901822114 +0000 UTC m=+1180.051486507" watchObservedRunningTime="2026-01-28 10:02:06.906906551 +0000 UTC m=+1180.056570924" Jan 28 10:02:07 crc kubenswrapper[4735]: I0128 10:02:07.111787 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:02:07 crc kubenswrapper[4735]: I0128 10:02:07.175445 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/barbican-api-7f67d6d798-h9vrm" Jan 28 10:02:07 crc kubenswrapper[4735]: I0128 10:02:07.230383 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84dcc47884-72t49"] Jan 28 10:02:07 crc kubenswrapper[4735]: I0128 10:02:07.230617 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84dcc47884-72t49" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api-log" containerID="cri-o://99f414f9ffc9be16ac74924d33c4ed5ff0b9de076a5fa941301d456020d03568" gracePeriod=30 Jan 28 10:02:07 crc kubenswrapper[4735]: I0128 10:02:07.230693 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84dcc47884-72t49" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api" containerID="cri-o://c3b0479d66f581ff80871df5f49f894e8b2dbbee9103a614b9cbc384aedaa1c7" gracePeriod=30 Jan 28 10:02:07 crc kubenswrapper[4735]: I0128 10:02:07.894417 4735 generic.go:334] "Generic (PLEG): container finished" podID="28705939-381c-428c-8b71-1cab75747dc3" containerID="99f414f9ffc9be16ac74924d33c4ed5ff0b9de076a5fa941301d456020d03568" exitCode=143 Jan 28 10:02:07 crc kubenswrapper[4735]: I0128 10:02:07.894481 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dcc47884-72t49" event={"ID":"28705939-381c-428c-8b71-1cab75747dc3","Type":"ContainerDied","Data":"99f414f9ffc9be16ac74924d33c4ed5ff0b9de076a5fa941301d456020d03568"} Jan 28 10:02:09 crc kubenswrapper[4735]: I0128 10:02:09.189886 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 10:02:10 crc kubenswrapper[4735]: I0128 10:02:10.160576 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-69fccdbfc9-k2hlg" Jan 28 10:02:10 crc kubenswrapper[4735]: I0128 10:02:10.508887 4735 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/barbican-api-84dcc47884-72t49" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:54482->10.217.0.165:9311: read: connection reset by peer" Jan 28 10:02:10 crc kubenswrapper[4735]: I0128 10:02:10.509422 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84dcc47884-72t49" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:54490->10.217.0.165:9311: read: connection reset by peer" Jan 28 10:02:10 crc kubenswrapper[4735]: I0128 10:02:10.927583 4735 generic.go:334] "Generic (PLEG): container finished" podID="28705939-381c-428c-8b71-1cab75747dc3" containerID="c3b0479d66f581ff80871df5f49f894e8b2dbbee9103a614b9cbc384aedaa1c7" exitCode=0 Jan 28 10:02:10 crc kubenswrapper[4735]: I0128 10:02:10.927679 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dcc47884-72t49" event={"ID":"28705939-381c-428c-8b71-1cab75747dc3","Type":"ContainerDied","Data":"c3b0479d66f581ff80871df5f49f894e8b2dbbee9103a614b9cbc384aedaa1c7"} Jan 28 10:02:10 crc kubenswrapper[4735]: I0128 10:02:10.927845 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dcc47884-72t49" event={"ID":"28705939-381c-428c-8b71-1cab75747dc3","Type":"ContainerDied","Data":"ba791be2d2e0a20be70d31758af606c2292d4bbd22b37e775f8a94669a61a9c6"} Jan 28 10:02:10 crc kubenswrapper[4735]: I0128 10:02:10.927862 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba791be2d2e0a20be70d31758af606c2292d4bbd22b37e775f8a94669a61a9c6" Jan 28 10:02:10 crc kubenswrapper[4735]: I0128 10:02:10.935548 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.012867 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28705939-381c-428c-8b71-1cab75747dc3-logs\") pod \"28705939-381c-428c-8b71-1cab75747dc3\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.012947 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-combined-ca-bundle\") pod \"28705939-381c-428c-8b71-1cab75747dc3\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.013059 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data\") pod \"28705939-381c-428c-8b71-1cab75747dc3\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.013156 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wdk6\" (UniqueName: \"kubernetes.io/projected/28705939-381c-428c-8b71-1cab75747dc3-kube-api-access-8wdk6\") pod \"28705939-381c-428c-8b71-1cab75747dc3\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.013396 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data-custom\") pod \"28705939-381c-428c-8b71-1cab75747dc3\" (UID: \"28705939-381c-428c-8b71-1cab75747dc3\") " Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.013536 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/28705939-381c-428c-8b71-1cab75747dc3-logs" (OuterVolumeSpecName: "logs") pod "28705939-381c-428c-8b71-1cab75747dc3" (UID: "28705939-381c-428c-8b71-1cab75747dc3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.014008 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28705939-381c-428c-8b71-1cab75747dc3-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.019389 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28705939-381c-428c-8b71-1cab75747dc3-kube-api-access-8wdk6" (OuterVolumeSpecName: "kube-api-access-8wdk6") pod "28705939-381c-428c-8b71-1cab75747dc3" (UID: "28705939-381c-428c-8b71-1cab75747dc3"). InnerVolumeSpecName "kube-api-access-8wdk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.028391 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "28705939-381c-428c-8b71-1cab75747dc3" (UID: "28705939-381c-428c-8b71-1cab75747dc3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.058509 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28705939-381c-428c-8b71-1cab75747dc3" (UID: "28705939-381c-428c-8b71-1cab75747dc3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.081797 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data" (OuterVolumeSpecName: "config-data") pod "28705939-381c-428c-8b71-1cab75747dc3" (UID: "28705939-381c-428c-8b71-1cab75747dc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.115402 4735 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.115606 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.115665 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28705939-381c-428c-8b71-1cab75747dc3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.115726 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wdk6\" (UniqueName: \"kubernetes.io/projected/28705939-381c-428c-8b71-1cab75747dc3-kube-api-access-8wdk6\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.937060 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84dcc47884-72t49" Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.960936 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84dcc47884-72t49"] Jan 28 10:02:11 crc kubenswrapper[4735]: I0128 10:02:11.969104 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-84dcc47884-72t49"] Jan 28 10:02:12 crc kubenswrapper[4735]: I0128 10:02:12.397101 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bdcb89768-499m5" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.518376 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28705939-381c-428c-8b71-1cab75747dc3" path="/var/lib/kubelet/pods/28705939-381c-428c-8b71-1cab75747dc3/volumes" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.797431 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-79b9899549-8qxf5"] Jan 28 10:02:13 crc kubenswrapper[4735]: E0128 10:02:13.798124 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.798227 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api" Jan 28 10:02:13 crc kubenswrapper[4735]: E0128 10:02:13.798317 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api-log" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.798383 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api-log" Jan 28 
10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.798661 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api-log" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.798800 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="28705939-381c-428c-8b71-1cab75747dc3" containerName="barbican-api" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.800032 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.803646 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.804252 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.804683 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.818054 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-79b9899549-8qxf5"] Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.864704 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/681a40ab-fe34-4825-8336-3b8048fc04b7-log-httpd\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.864846 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/681a40ab-fe34-4825-8336-3b8048fc04b7-etc-swift\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " 
pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.864911 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/681a40ab-fe34-4825-8336-3b8048fc04b7-run-httpd\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.864940 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzd7k\" (UniqueName: \"kubernetes.io/projected/681a40ab-fe34-4825-8336-3b8048fc04b7-kube-api-access-xzd7k\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.864961 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-config-data\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.864996 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-combined-ca-bundle\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.865025 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-internal-tls-certs\") pod \"swift-proxy-79b9899549-8qxf5\" 
(UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.865072 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-public-tls-certs\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.966946 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-internal-tls-certs\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.967015 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-public-tls-certs\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.967068 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/681a40ab-fe34-4825-8336-3b8048fc04b7-log-httpd\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.967119 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/681a40ab-fe34-4825-8336-3b8048fc04b7-etc-swift\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") 
" pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.967161 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/681a40ab-fe34-4825-8336-3b8048fc04b7-run-httpd\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.967180 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzd7k\" (UniqueName: \"kubernetes.io/projected/681a40ab-fe34-4825-8336-3b8048fc04b7-kube-api-access-xzd7k\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.967198 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-config-data\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.967220 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-combined-ca-bundle\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.967952 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/681a40ab-fe34-4825-8336-3b8048fc04b7-run-httpd\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc 
kubenswrapper[4735]: I0128 10:02:13.968038 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/681a40ab-fe34-4825-8336-3b8048fc04b7-log-httpd\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.975104 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/681a40ab-fe34-4825-8336-3b8048fc04b7-etc-swift\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.975493 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-combined-ca-bundle\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.975546 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-public-tls-certs\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.978654 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-config-data\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.982333 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/681a40ab-fe34-4825-8336-3b8048fc04b7-internal-tls-certs\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:13 crc kubenswrapper[4735]: I0128 10:02:13.999509 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzd7k\" (UniqueName: \"kubernetes.io/projected/681a40ab-fe34-4825-8336-3b8048fc04b7-kube-api-access-xzd7k\") pod \"swift-proxy-79b9899549-8qxf5\" (UID: \"681a40ab-fe34-4825-8336-3b8048fc04b7\") " pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.121254 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.470319 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.721758 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-79b9899549-8qxf5"] Jan 28 10:02:14 crc kubenswrapper[4735]: W0128 10:02:14.724179 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod681a40ab_fe34_4825_8336_3b8048fc04b7.slice/crio-d92693ec0240bd5a0c31dd6ad1683c172b534d755e7d6c481a0447b14999732a WatchSource:0}: Error finding container d92693ec0240bd5a0c31dd6ad1683c172b534d755e7d6c481a0447b14999732a: Status 404 returned error can't find the container with id d92693ec0240bd5a0c31dd6ad1683c172b534d755e7d6c481a0447b14999732a Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.955306 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.957641 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.959864 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.960368 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.962640 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.963902 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-dvthg" Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.992302 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-79b9899549-8qxf5" event={"ID":"681a40ab-fe34-4825-8336-3b8048fc04b7","Type":"ContainerStarted","Data":"cb20b63bbc31c4d946754932efd41a52081620ba188f16b8553b936d79a3e076"} Jan 28 10:02:14 crc kubenswrapper[4735]: I0128 10:02:14.992356 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-79b9899549-8qxf5" event={"ID":"681a40ab-fe34-4825-8336-3b8048fc04b7","Type":"ContainerStarted","Data":"d92693ec0240bd5a0c31dd6ad1683c172b534d755e7d6c481a0447b14999732a"} Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.089700 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffj5m\" (UniqueName: \"kubernetes.io/projected/9360da99-a521-416d-868b-69526589a422-kube-api-access-ffj5m\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.090259 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9360da99-a521-416d-868b-69526589a422-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.090437 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9360da99-a521-416d-868b-69526589a422-openstack-config\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.090524 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9360da99-a521-416d-868b-69526589a422-openstack-config-secret\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.191827 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9360da99-a521-416d-868b-69526589a422-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.191929 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9360da99-a521-416d-868b-69526589a422-openstack-config\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.191948 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9360da99-a521-416d-868b-69526589a422-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.191994 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffj5m\" (UniqueName: \"kubernetes.io/projected/9360da99-a521-416d-868b-69526589a422-kube-api-access-ffj5m\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.193011 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9360da99-a521-416d-868b-69526589a422-openstack-config\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.196710 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9360da99-a521-416d-868b-69526589a422-openstack-config-secret\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.197095 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9360da99-a521-416d-868b-69526589a422-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.215389 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffj5m\" (UniqueName: \"kubernetes.io/projected/9360da99-a521-416d-868b-69526589a422-kube-api-access-ffj5m\") pod \"openstackclient\" (UID: \"9360da99-a521-416d-868b-69526589a422\") " pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.375406 
4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 10:02:15 crc kubenswrapper[4735]: I0128 10:02:15.855214 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 10:02:15 crc kubenswrapper[4735]: W0128 10:02:15.856600 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9360da99_a521_416d_868b_69526589a422.slice/crio-5c45fb53a9357c65f23f406b1cf91bbdac5de1f05974016f21efe008df412c15 WatchSource:0}: Error finding container 5c45fb53a9357c65f23f406b1cf91bbdac5de1f05974016f21efe008df412c15: Status 404 returned error can't find the container with id 5c45fb53a9357c65f23f406b1cf91bbdac5de1f05974016f21efe008df412c15 Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.000597 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9360da99-a521-416d-868b-69526589a422","Type":"ContainerStarted","Data":"5c45fb53a9357c65f23f406b1cf91bbdac5de1f05974016f21efe008df412c15"} Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.002598 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-79b9899549-8qxf5" event={"ID":"681a40ab-fe34-4825-8336-3b8048fc04b7","Type":"ContainerStarted","Data":"a21211d263fb9de4b49bcf60e9c4d97dcfc5013256b46caafa2f4eb281c4d69f"} Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.003972 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.004007 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.113068 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-79b9899549-8qxf5" podStartSLOduration=3.11304765 
podStartE2EDuration="3.11304765s" podCreationTimestamp="2026-01-28 10:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:02:16.024959842 +0000 UTC m=+1189.174624215" watchObservedRunningTime="2026-01-28 10:02:16.11304765 +0000 UTC m=+1189.262712033" Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.129145 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.129484 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="ceilometer-central-agent" containerID="cri-o://ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f" gracePeriod=30 Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.129571 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="sg-core" containerID="cri-o://1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9" gracePeriod=30 Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.129571 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="ceilometer-notification-agent" containerID="cri-o://db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649" gracePeriod=30 Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.129599 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="proxy-httpd" containerID="cri-o://33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a" gracePeriod=30 Jan 28 10:02:16 crc kubenswrapper[4735]: I0128 10:02:16.134544 4735 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 10:02:17 crc kubenswrapper[4735]: I0128 10:02:17.052645 4735 generic.go:334] "Generic (PLEG): container finished" podID="596f1b95-b8a4-4444-a984-a26a031f595f" containerID="33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a" exitCode=0 Jan 28 10:02:17 crc kubenswrapper[4735]: I0128 10:02:17.053006 4735 generic.go:334] "Generic (PLEG): container finished" podID="596f1b95-b8a4-4444-a984-a26a031f595f" containerID="1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9" exitCode=2 Jan 28 10:02:17 crc kubenswrapper[4735]: I0128 10:02:17.053021 4735 generic.go:334] "Generic (PLEG): container finished" podID="596f1b95-b8a4-4444-a984-a26a031f595f" containerID="ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f" exitCode=0 Jan 28 10:02:17 crc kubenswrapper[4735]: I0128 10:02:17.053995 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerDied","Data":"33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a"} Jan 28 10:02:17 crc kubenswrapper[4735]: I0128 10:02:17.054033 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerDied","Data":"1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9"} Jan 28 10:02:17 crc kubenswrapper[4735]: I0128 10:02:17.054046 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerDied","Data":"ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f"} Jan 28 10:02:17 crc kubenswrapper[4735]: I0128 10:02:17.934385 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.048354 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-scripts\") pod \"596f1b95-b8a4-4444-a984-a26a031f595f\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.048651 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-sg-core-conf-yaml\") pod \"596f1b95-b8a4-4444-a984-a26a031f595f\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.048672 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-combined-ca-bundle\") pod \"596f1b95-b8a4-4444-a984-a26a031f595f\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.048741 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-run-httpd\") pod \"596f1b95-b8a4-4444-a984-a26a031f595f\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.048819 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-log-httpd\") pod \"596f1b95-b8a4-4444-a984-a26a031f595f\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.048882 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drzk5\" (UniqueName: 
\"kubernetes.io/projected/596f1b95-b8a4-4444-a984-a26a031f595f-kube-api-access-drzk5\") pod \"596f1b95-b8a4-4444-a984-a26a031f595f\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.048959 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-config-data\") pod \"596f1b95-b8a4-4444-a984-a26a031f595f\" (UID: \"596f1b95-b8a4-4444-a984-a26a031f595f\") " Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.049358 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "596f1b95-b8a4-4444-a984-a26a031f595f" (UID: "596f1b95-b8a4-4444-a984-a26a031f595f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.049553 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "596f1b95-b8a4-4444-a984-a26a031f595f" (UID: "596f1b95-b8a4-4444-a984-a26a031f595f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.050081 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.050099 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/596f1b95-b8a4-4444-a984-a26a031f595f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.054338 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-scripts" (OuterVolumeSpecName: "scripts") pod "596f1b95-b8a4-4444-a984-a26a031f595f" (UID: "596f1b95-b8a4-4444-a984-a26a031f595f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.071322 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/596f1b95-b8a4-4444-a984-a26a031f595f-kube-api-access-drzk5" (OuterVolumeSpecName: "kube-api-access-drzk5") pod "596f1b95-b8a4-4444-a984-a26a031f595f" (UID: "596f1b95-b8a4-4444-a984-a26a031f595f"). InnerVolumeSpecName "kube-api-access-drzk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.082156 4735 generic.go:334] "Generic (PLEG): container finished" podID="596f1b95-b8a4-4444-a984-a26a031f595f" containerID="db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649" exitCode=0 Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.083155 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.083579 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerDied","Data":"db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649"} Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.083618 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"596f1b95-b8a4-4444-a984-a26a031f595f","Type":"ContainerDied","Data":"e4a879d51bc6d8247dd499442cdd902b604dae97dbbecb3d464313eabffce65e"} Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.083641 4735 scope.go:117] "RemoveContainer" containerID="33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.086833 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "596f1b95-b8a4-4444-a984-a26a031f595f" (UID: "596f1b95-b8a4-4444-a984-a26a031f595f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.135645 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "596f1b95-b8a4-4444-a984-a26a031f595f" (UID: "596f1b95-b8a4-4444-a984-a26a031f595f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.147205 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-config-data" (OuterVolumeSpecName: "config-data") pod "596f1b95-b8a4-4444-a984-a26a031f595f" (UID: "596f1b95-b8a4-4444-a984-a26a031f595f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.152014 4735 scope.go:117] "RemoveContainer" containerID="1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.152095 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drzk5\" (UniqueName: \"kubernetes.io/projected/596f1b95-b8a4-4444-a984-a26a031f595f-kube-api-access-drzk5\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.152129 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.152141 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.152154 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.152166 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596f1b95-b8a4-4444-a984-a26a031f595f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 
28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.171292 4735 scope.go:117] "RemoveContainer" containerID="db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.188736 4735 scope.go:117] "RemoveContainer" containerID="ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.221630 4735 scope.go:117] "RemoveContainer" containerID="33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a" Jan 28 10:02:18 crc kubenswrapper[4735]: E0128 10:02:18.222485 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a\": container with ID starting with 33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a not found: ID does not exist" containerID="33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.222531 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a"} err="failed to get container status \"33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a\": rpc error: code = NotFound desc = could not find container \"33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a\": container with ID starting with 33a8c175b16d39a6034160f0ff9b7d51331fd54ca21e5c11c3695e5813b98a6a not found: ID does not exist" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.222560 4735 scope.go:117] "RemoveContainer" containerID="1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9" Jan 28 10:02:18 crc kubenswrapper[4735]: E0128 10:02:18.223057 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9\": container with ID starting with 1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9 not found: ID does not exist" containerID="1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.223092 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9"} err="failed to get container status \"1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9\": rpc error: code = NotFound desc = could not find container \"1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9\": container with ID starting with 1852a9c62fabef7bae41aced4696c3c478016b4841b1977083d92206389863c9 not found: ID does not exist" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.223114 4735 scope.go:117] "RemoveContainer" containerID="db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649" Jan 28 10:02:18 crc kubenswrapper[4735]: E0128 10:02:18.223502 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649\": container with ID starting with db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649 not found: ID does not exist" containerID="db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.223527 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649"} err="failed to get container status \"db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649\": rpc error: code = NotFound desc = could not find container \"db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649\": container with ID 
starting with db7ad47da7352b46ea621b1d70414da77d31a0951a5218d75192861cdbb8a649 not found: ID does not exist" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.223540 4735 scope.go:117] "RemoveContainer" containerID="ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f" Jan 28 10:02:18 crc kubenswrapper[4735]: E0128 10:02:18.223961 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f\": container with ID starting with ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f not found: ID does not exist" containerID="ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.223984 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f"} err="failed to get container status \"ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f\": rpc error: code = NotFound desc = could not find container \"ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f\": container with ID starting with ea7e5355d48de3dd3bcce24663dcc55f3d37fd900bcf436eef7e37bf90ae3d5f not found: ID does not exist" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.428825 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.440931 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.455598 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:18 crc kubenswrapper[4735]: E0128 10:02:18.455965 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="proxy-httpd" Jan 28 
10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.455983 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="proxy-httpd" Jan 28 10:02:18 crc kubenswrapper[4735]: E0128 10:02:18.455997 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="ceilometer-central-agent" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.456004 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="ceilometer-central-agent" Jan 28 10:02:18 crc kubenswrapper[4735]: E0128 10:02:18.456025 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="ceilometer-notification-agent" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.456031 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="ceilometer-notification-agent" Jan 28 10:02:18 crc kubenswrapper[4735]: E0128 10:02:18.456040 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="sg-core" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.456045 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="sg-core" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.456201 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="sg-core" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.456220 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="ceilometer-notification-agent" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.456231 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" 
containerName="ceilometer-central-agent" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.456245 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" containerName="proxy-httpd" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.457676 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.461181 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.461281 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.472983 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.560107 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-run-httpd\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.560153 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.560182 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58q77\" (UniqueName: \"kubernetes.io/projected/32761457-3d54-47b9-a190-8801dc2f116f-kube-api-access-58q77\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " 
pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.560443 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-scripts\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.560527 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-config-data\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.560597 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-log-httpd\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.560655 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.662001 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.662043 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-58q77\" (UniqueName: \"kubernetes.io/projected/32761457-3d54-47b9-a190-8801dc2f116f-kube-api-access-58q77\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.662104 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-scripts\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.662134 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-config-data\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.662167 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-log-httpd\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.662191 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.662252 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-run-httpd\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc 
kubenswrapper[4735]: I0128 10:02:18.662647 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-run-httpd\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.664759 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-log-httpd\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.666505 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.671410 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-config-data\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.672126 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-scripts\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.673308 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.688635 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58q77\" (UniqueName: \"kubernetes.io/projected/32761457-3d54-47b9-a190-8801dc2f116f-kube-api-access-58q77\") pod \"ceilometer-0\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " pod="openstack/ceilometer-0" Jan 28 10:02:18 crc kubenswrapper[4735]: I0128 10:02:18.777901 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:19 crc kubenswrapper[4735]: I0128 10:02:19.130771 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:19 crc kubenswrapper[4735]: I0128 10:02:19.225548 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:19 crc kubenswrapper[4735]: W0128 10:02:19.232802 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32761457_3d54_47b9_a190_8801dc2f116f.slice/crio-27534ba1c7243c16825f1293c39b6a675e5ad2a986a012c96f3446256ae252ea WatchSource:0}: Error finding container 27534ba1c7243c16825f1293c39b6a675e5ad2a986a012c96f3446256ae252ea: Status 404 returned error can't find the container with id 27534ba1c7243c16825f1293c39b6a675e5ad2a986a012c96f3446256ae252ea Jan 28 10:02:19 crc kubenswrapper[4735]: I0128 10:02:19.513460 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="596f1b95-b8a4-4444-a984-a26a031f595f" path="/var/lib/kubelet/pods/596f1b95-b8a4-4444-a984-a26a031f595f/volumes" Jan 28 10:02:20 crc kubenswrapper[4735]: I0128 10:02:20.103846 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerStarted","Data":"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b"} Jan 28 10:02:20 crc kubenswrapper[4735]: I0128 10:02:20.104097 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerStarted","Data":"27534ba1c7243c16825f1293c39b6a675e5ad2a986a012c96f3446256ae252ea"} Jan 28 10:02:22 crc kubenswrapper[4735]: I0128 10:02:22.396680 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bdcb89768-499m5" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 28 10:02:24 crc kubenswrapper[4735]: I0128 10:02:24.133075 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-79b9899549-8qxf5" Jan 28 10:02:25 crc kubenswrapper[4735]: I0128 10:02:25.112085 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:26 crc kubenswrapper[4735]: I0128 10:02:26.164348 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerStarted","Data":"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc"} Jan 28 10:02:26 crc kubenswrapper[4735]: I0128 10:02:26.166073 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9360da99-a521-416d-868b-69526589a422","Type":"ContainerStarted","Data":"defa252ecc18e20e3fed4c7a0c730e4f959f31b12ff2898167a3f7e7957d61c9"} Jan 28 10:02:26 crc kubenswrapper[4735]: I0128 10:02:26.187593 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.127434797 
podStartE2EDuration="12.187572583s" podCreationTimestamp="2026-01-28 10:02:14 +0000 UTC" firstStartedPulling="2026-01-28 10:02:15.859993083 +0000 UTC m=+1189.009657466" lastFinishedPulling="2026-01-28 10:02:25.920130879 +0000 UTC m=+1199.069795252" observedRunningTime="2026-01-28 10:02:26.182197949 +0000 UTC m=+1199.331862332" watchObservedRunningTime="2026-01-28 10:02:26.187572583 +0000 UTC m=+1199.337236966" Jan 28 10:02:26 crc kubenswrapper[4735]: I0128 10:02:26.543005 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77978c49d9-227fz" Jan 28 10:02:26 crc kubenswrapper[4735]: I0128 10:02:26.637128 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-78b5d6cfb6-dvwt2"] Jan 28 10:02:26 crc kubenswrapper[4735]: I0128 10:02:26.637581 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-78b5d6cfb6-dvwt2" podUID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerName="neutron-api" containerID="cri-o://fd69ba488e747c3743dd6b999c99cc985490a9f4db64fe0bd212ba5843dfd4a6" gracePeriod=30 Jan 28 10:02:26 crc kubenswrapper[4735]: I0128 10:02:26.638112 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-78b5d6cfb6-dvwt2" podUID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerName="neutron-httpd" containerID="cri-o://14421055e39f0220830ac82638f32ebf9c7f8520cc4ffcdee3c4177c88b3d29a" gracePeriod=30 Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.185052 4735 generic.go:334] "Generic (PLEG): container finished" podID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerID="14421055e39f0220830ac82638f32ebf9c7f8520cc4ffcdee3c4177c88b3d29a" exitCode=0 Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.185269 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b5d6cfb6-dvwt2" 
event={"ID":"71a6cc2d-71fb-4872-a42f-967a43beb910","Type":"ContainerDied","Data":"14421055e39f0220830ac82638f32ebf9c7f8520cc4ffcdee3c4177c88b3d29a"} Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.187565 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerStarted","Data":"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659"} Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.190295 4735 generic.go:334] "Generic (PLEG): container finished" podID="160ae085-44c0-4cef-9854-b0686547b43c" containerID="dbec602550a61cd364029656e02575fa14e978705c0893330895bcbcbd87ac47" exitCode=137 Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.190554 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bdcb89768-499m5" event={"ID":"160ae085-44c0-4cef-9854-b0686547b43c","Type":"ContainerDied","Data":"dbec602550a61cd364029656e02575fa14e978705c0893330895bcbcbd87ac47"} Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.451911 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.582032 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-config-data\") pod \"160ae085-44c0-4cef-9854-b0686547b43c\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.582090 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkd2x\" (UniqueName: \"kubernetes.io/projected/160ae085-44c0-4cef-9854-b0686547b43c-kube-api-access-kkd2x\") pod \"160ae085-44c0-4cef-9854-b0686547b43c\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.582145 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-secret-key\") pod \"160ae085-44c0-4cef-9854-b0686547b43c\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.582179 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-tls-certs\") pod \"160ae085-44c0-4cef-9854-b0686547b43c\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.582217 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-scripts\") pod \"160ae085-44c0-4cef-9854-b0686547b43c\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.582273 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/160ae085-44c0-4cef-9854-b0686547b43c-logs\") pod \"160ae085-44c0-4cef-9854-b0686547b43c\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.582409 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-combined-ca-bundle\") pod \"160ae085-44c0-4cef-9854-b0686547b43c\" (UID: \"160ae085-44c0-4cef-9854-b0686547b43c\") " Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.589594 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/160ae085-44c0-4cef-9854-b0686547b43c-logs" (OuterVolumeSpecName: "logs") pod "160ae085-44c0-4cef-9854-b0686547b43c" (UID: "160ae085-44c0-4cef-9854-b0686547b43c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.589713 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "160ae085-44c0-4cef-9854-b0686547b43c" (UID: "160ae085-44c0-4cef-9854-b0686547b43c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.593606 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160ae085-44c0-4cef-9854-b0686547b43c-kube-api-access-kkd2x" (OuterVolumeSpecName: "kube-api-access-kkd2x") pod "160ae085-44c0-4cef-9854-b0686547b43c" (UID: "160ae085-44c0-4cef-9854-b0686547b43c"). InnerVolumeSpecName "kube-api-access-kkd2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.614933 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-config-data" (OuterVolumeSpecName: "config-data") pod "160ae085-44c0-4cef-9854-b0686547b43c" (UID: "160ae085-44c0-4cef-9854-b0686547b43c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.615060 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "160ae085-44c0-4cef-9854-b0686547b43c" (UID: "160ae085-44c0-4cef-9854-b0686547b43c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.631516 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-scripts" (OuterVolumeSpecName: "scripts") pod "160ae085-44c0-4cef-9854-b0686547b43c" (UID: "160ae085-44c0-4cef-9854-b0686547b43c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.666016 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "160ae085-44c0-4cef-9854-b0686547b43c" (UID: "160ae085-44c0-4cef-9854-b0686547b43c"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.685116 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.685157 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.685239 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkd2x\" (UniqueName: \"kubernetes.io/projected/160ae085-44c0-4cef-9854-b0686547b43c-kube-api-access-kkd2x\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.685255 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.685268 4735 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/160ae085-44c0-4cef-9854-b0686547b43c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.685278 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/160ae085-44c0-4cef-9854-b0686547b43c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:27 crc kubenswrapper[4735]: I0128 10:02:27.685288 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/160ae085-44c0-4cef-9854-b0686547b43c-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:28 crc kubenswrapper[4735]: I0128 10:02:28.199858 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bdcb89768-499m5" event={"ID":"160ae085-44c0-4cef-9854-b0686547b43c","Type":"ContainerDied","Data":"3ca923335ce604c00a3c11a7a63a2f3a3757a0440e8a0e6b68ff721259172bac"} Jan 28 10:02:28 crc kubenswrapper[4735]: I0128 10:02:28.199916 4735 scope.go:117] "RemoveContainer" containerID="8367909f7f9f94db3f14f8c070c49e1d38bee064d5e8eeb3fa42c652c31be1f9" Jan 28 10:02:28 crc kubenswrapper[4735]: I0128 10:02:28.199923 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bdcb89768-499m5" Jan 28 10:02:28 crc kubenswrapper[4735]: I0128 10:02:28.234331 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bdcb89768-499m5"] Jan 28 10:02:28 crc kubenswrapper[4735]: I0128 10:02:28.236136 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-b87b5c6d9-94mvs" podUID="58725e47-7476-44f5-ad7b-ccd74d87aace" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 10:02:28 crc kubenswrapper[4735]: I0128 10:02:28.243168 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bdcb89768-499m5"] Jan 28 10:02:28 crc kubenswrapper[4735]: I0128 10:02:28.413409 4735 scope.go:117] "RemoveContainer" containerID="dbec602550a61cd364029656e02575fa14e978705c0893330895bcbcbd87ac47" Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.210716 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerStarted","Data":"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90"} Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.211138 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 10:02:29 crc 
kubenswrapper[4735]: I0128 10:02:29.210888 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="sg-core" containerID="cri-o://e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659" gracePeriod=30 Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.210872 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="ceilometer-central-agent" containerID="cri-o://4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b" gracePeriod=30 Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.210907 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="proxy-httpd" containerID="cri-o://cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90" gracePeriod=30 Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.210923 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="ceilometer-notification-agent" containerID="cri-o://d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc" gracePeriod=30 Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.245358 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.869487748 podStartE2EDuration="11.245335878s" podCreationTimestamp="2026-01-28 10:02:18 +0000 UTC" firstStartedPulling="2026-01-28 10:02:19.237838056 +0000 UTC m=+1192.387502429" lastFinishedPulling="2026-01-28 10:02:28.613686186 +0000 UTC m=+1201.763350559" observedRunningTime="2026-01-28 10:02:29.230516922 +0000 UTC m=+1202.380181295" watchObservedRunningTime="2026-01-28 10:02:29.245335878 +0000 UTC m=+1202.395000251" Jan 28 
10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.430719 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.434165 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-log" containerID="cri-o://3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542" gracePeriod=30 Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.434372 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-httpd" containerID="cri-o://e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a" gracePeriod=30 Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.523728 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160ae085-44c0-4cef-9854-b0686547b43c" path="/var/lib/kubelet/pods/160ae085-44c0-4cef-9854-b0686547b43c/volumes" Jan 28 10:02:29 crc kubenswrapper[4735]: I0128 10:02:29.955969 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.131624 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-scripts\") pod \"32761457-3d54-47b9-a190-8801dc2f116f\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.131719 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-log-httpd\") pod \"32761457-3d54-47b9-a190-8801dc2f116f\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.131811 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-sg-core-conf-yaml\") pod \"32761457-3d54-47b9-a190-8801dc2f116f\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.131849 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-combined-ca-bundle\") pod \"32761457-3d54-47b9-a190-8801dc2f116f\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.131881 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-config-data\") pod \"32761457-3d54-47b9-a190-8801dc2f116f\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.131916 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-run-httpd\") pod \"32761457-3d54-47b9-a190-8801dc2f116f\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.131953 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58q77\" (UniqueName: \"kubernetes.io/projected/32761457-3d54-47b9-a190-8801dc2f116f-kube-api-access-58q77\") pod \"32761457-3d54-47b9-a190-8801dc2f116f\" (UID: \"32761457-3d54-47b9-a190-8801dc2f116f\") " Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.132441 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "32761457-3d54-47b9-a190-8801dc2f116f" (UID: "32761457-3d54-47b9-a190-8801dc2f116f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.132665 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "32761457-3d54-47b9-a190-8801dc2f116f" (UID: "32761457-3d54-47b9-a190-8801dc2f116f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.137576 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-scripts" (OuterVolumeSpecName: "scripts") pod "32761457-3d54-47b9-a190-8801dc2f116f" (UID: "32761457-3d54-47b9-a190-8801dc2f116f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.137572 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32761457-3d54-47b9-a190-8801dc2f116f-kube-api-access-58q77" (OuterVolumeSpecName: "kube-api-access-58q77") pod "32761457-3d54-47b9-a190-8801dc2f116f" (UID: "32761457-3d54-47b9-a190-8801dc2f116f"). InnerVolumeSpecName "kube-api-access-58q77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.163554 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "32761457-3d54-47b9-a190-8801dc2f116f" (UID: "32761457-3d54-47b9-a190-8801dc2f116f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.231940 4735 generic.go:334] "Generic (PLEG): container finished" podID="76060949-a1d9-4101-9d17-adc2e359004b" containerID="3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542" exitCode=143 Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.232039 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76060949-a1d9-4101-9d17-adc2e359004b","Type":"ContainerDied","Data":"3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542"} Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.237232 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32761457-3d54-47b9-a190-8801dc2f116f" (UID: "32761457-3d54-47b9-a190-8801dc2f116f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.238019 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.238046 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.238055 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.238064 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58q77\" (UniqueName: \"kubernetes.io/projected/32761457-3d54-47b9-a190-8801dc2f116f-kube-api-access-58q77\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.238075 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.238084 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32761457-3d54-47b9-a190-8801dc2f116f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.241930 4735 generic.go:334] "Generic (PLEG): container finished" podID="32761457-3d54-47b9-a190-8801dc2f116f" containerID="cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90" exitCode=0 Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.241959 4735 generic.go:334] "Generic 
(PLEG): container finished" podID="32761457-3d54-47b9-a190-8801dc2f116f" containerID="e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659" exitCode=2 Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.241969 4735 generic.go:334] "Generic (PLEG): container finished" podID="32761457-3d54-47b9-a190-8801dc2f116f" containerID="d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc" exitCode=0 Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.241976 4735 generic.go:334] "Generic (PLEG): container finished" podID="32761457-3d54-47b9-a190-8801dc2f116f" containerID="4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b" exitCode=0 Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.241995 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerDied","Data":"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90"} Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.242019 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerDied","Data":"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659"} Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.242028 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerDied","Data":"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc"} Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.242038 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerDied","Data":"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b"} Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.242046 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"32761457-3d54-47b9-a190-8801dc2f116f","Type":"ContainerDied","Data":"27534ba1c7243c16825f1293c39b6a675e5ad2a986a012c96f3446256ae252ea"} Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.242053 4735 scope.go:117] "RemoveContainer" containerID="cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.242064 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.261950 4735 scope.go:117] "RemoveContainer" containerID="e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.273867 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-config-data" (OuterVolumeSpecName: "config-data") pod "32761457-3d54-47b9-a190-8801dc2f116f" (UID: "32761457-3d54-47b9-a190-8801dc2f116f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.278690 4735 scope.go:117] "RemoveContainer" containerID="d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.296648 4735 scope.go:117] "RemoveContainer" containerID="4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.314629 4735 scope.go:117] "RemoveContainer" containerID="cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90" Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.315099 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90\": container with ID starting with cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90 not found: ID does not exist" containerID="cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.315139 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90"} err="failed to get container status \"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90\": rpc error: code = NotFound desc = could not find container \"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90\": container with ID starting with cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90 not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.315165 4735 scope.go:117] "RemoveContainer" containerID="e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659" Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.315463 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659\": container with ID starting with e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659 not found: ID does not exist" containerID="e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.315529 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659"} err="failed to get container status \"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659\": rpc error: code = NotFound desc = could not find container \"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659\": container with ID starting with e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659 not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.315546 4735 scope.go:117] "RemoveContainer" containerID="d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc" Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.315957 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc\": container with ID starting with d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc not found: ID does not exist" containerID="d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.315990 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc"} err="failed to get container status \"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc\": rpc error: code = NotFound desc = could not find container \"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc\": 
container with ID starting with d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.316007 4735 scope.go:117] "RemoveContainer" containerID="4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b" Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.316228 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b\": container with ID starting with 4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b not found: ID does not exist" containerID="4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.316256 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b"} err="failed to get container status \"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b\": rpc error: code = NotFound desc = could not find container \"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b\": container with ID starting with 4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.316272 4735 scope.go:117] "RemoveContainer" containerID="cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.316479 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90"} err="failed to get container status \"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90\": rpc error: code = NotFound desc = could not find container 
\"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90\": container with ID starting with cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90 not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.316504 4735 scope.go:117] "RemoveContainer" containerID="e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.316683 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659"} err="failed to get container status \"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659\": rpc error: code = NotFound desc = could not find container \"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659\": container with ID starting with e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659 not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.316707 4735 scope.go:117] "RemoveContainer" containerID="d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.316938 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc"} err="failed to get container status \"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc\": rpc error: code = NotFound desc = could not find container \"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc\": container with ID starting with d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.316965 4735 scope.go:117] "RemoveContainer" containerID="4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.330328 4735 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b"} err="failed to get container status \"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b\": rpc error: code = NotFound desc = could not find container \"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b\": container with ID starting with 4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.330383 4735 scope.go:117] "RemoveContainer" containerID="cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.330826 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90"} err="failed to get container status \"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90\": rpc error: code = NotFound desc = could not find container \"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90\": container with ID starting with cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90 not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.330862 4735 scope.go:117] "RemoveContainer" containerID="e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.331146 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659"} err="failed to get container status \"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659\": rpc error: code = NotFound desc = could not find container \"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659\": container with ID starting with 
e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659 not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.331174 4735 scope.go:117] "RemoveContainer" containerID="d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.331429 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc"} err="failed to get container status \"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc\": rpc error: code = NotFound desc = could not find container \"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc\": container with ID starting with d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.331447 4735 scope.go:117] "RemoveContainer" containerID="4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.331662 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b"} err="failed to get container status \"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b\": rpc error: code = NotFound desc = could not find container \"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b\": container with ID starting with 4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.331681 4735 scope.go:117] "RemoveContainer" containerID="cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.331936 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90"} err="failed to get container status \"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90\": rpc error: code = NotFound desc = could not find container \"cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90\": container with ID starting with cb461ef0cf19fabb9602a1d66bb4469f538f67ebb20cfb2957031f7aeb190e90 not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.331955 4735 scope.go:117] "RemoveContainer" containerID="e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.332155 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659"} err="failed to get container status \"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659\": rpc error: code = NotFound desc = could not find container \"e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659\": container with ID starting with e9cfbbffc64aa93af2898c1e68df4d967e4c5c2741b1905fde520b43fdc44659 not found: ID does not exist" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.332172 4735 scope.go:117] "RemoveContainer" containerID="d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc" Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.332383 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc"} err="failed to get container status \"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc\": rpc error: code = NotFound desc = could not find container \"d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc\": container with ID starting with d13b831babd18900cc2d28138cdd40e84cc364b6e219403121e062cf9b42f7dc not found: ID does not 
exist"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.332407 4735 scope.go:117] "RemoveContainer" containerID="4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.332632 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b"} err="failed to get container status \"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b\": rpc error: code = NotFound desc = could not find container \"4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b\": container with ID starting with 4c5b99867a1d65aa61e9096a34442c354cdefe280b41638dcf8c43d554e92e5b not found: ID does not exist"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.344148 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32761457-3d54-47b9-a190-8801dc2f116f-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.574327 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.581548 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.608994 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.609516 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609532 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon"
Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.609549 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="sg-core"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609555 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="sg-core"
Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.609570 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="ceilometer-notification-agent"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609576 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="ceilometer-notification-agent"
Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.609584 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="proxy-httpd"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609590 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="proxy-httpd"
Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.609600 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon-log"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609607 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon-log"
Jan 28 10:02:30 crc kubenswrapper[4735]: E0128 10:02:30.609619 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="ceilometer-central-agent"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609626 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="ceilometer-central-agent"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609823 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="ceilometer-central-agent"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609837 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon-log"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609933 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="sg-core"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609955 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="ceilometer-notification-agent"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609970 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="32761457-3d54-47b9-a190-8801dc2f116f" containerName="proxy-httpd"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.609987 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="160ae085-44c0-4cef-9854-b0686547b43c" containerName="horizon"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.611950 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.617589 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.620343 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.620426 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.751544 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.751583 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-run-httpd\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.751600 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-log-httpd\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.751659 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnfdq\" (UniqueName: \"kubernetes.io/projected/e618722b-4792-416d-8d97-abc07296800c-kube-api-access-hnfdq\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.751686 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-config-data\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.751731 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-scripts\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.751789 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.853490 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnfdq\" (UniqueName: \"kubernetes.io/projected/e618722b-4792-416d-8d97-abc07296800c-kube-api-access-hnfdq\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.853587 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-config-data\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.853661 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-scripts\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.853720 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.854057 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.854094 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-run-httpd\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.854125 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-log-httpd\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.854623 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-run-httpd\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.854804 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-log-httpd\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.859017 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.859130 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.859279 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-config-data\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.863300 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-scripts\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.880990 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnfdq\" (UniqueName: \"kubernetes.io/projected/e618722b-4792-416d-8d97-abc07296800c-kube-api-access-hnfdq\") pod \"ceilometer-0\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " pod="openstack/ceilometer-0"
Jan 28 10:02:30 crc kubenswrapper[4735]: I0128 10:02:30.950683 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.269065 4735 generic.go:334] "Generic (PLEG): container finished" podID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerID="fd69ba488e747c3743dd6b999c99cc985490a9f4db64fe0bd212ba5843dfd4a6" exitCode=0
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.269136 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b5d6cfb6-dvwt2" event={"ID":"71a6cc2d-71fb-4872-a42f-967a43beb910","Type":"ContainerDied","Data":"fd69ba488e747c3743dd6b999c99cc985490a9f4db64fe0bd212ba5843dfd4a6"}
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.440597 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.520360 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32761457-3d54-47b9-a190-8801dc2f116f" path="/var/lib/kubelet/pods/32761457-3d54-47b9-a190-8801dc2f116f/volumes"
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.760687 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78b5d6cfb6-dvwt2"
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.781014 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-ovndb-tls-certs\") pod \"71a6cc2d-71fb-4872-a42f-967a43beb910\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") "
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.781067 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjwjc\" (UniqueName: \"kubernetes.io/projected/71a6cc2d-71fb-4872-a42f-967a43beb910-kube-api-access-bjwjc\") pod \"71a6cc2d-71fb-4872-a42f-967a43beb910\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") "
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.781118 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-config\") pod \"71a6cc2d-71fb-4872-a42f-967a43beb910\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") "
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.781138 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-combined-ca-bundle\") pod \"71a6cc2d-71fb-4872-a42f-967a43beb910\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") "
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.781190 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-httpd-config\") pod \"71a6cc2d-71fb-4872-a42f-967a43beb910\" (UID: \"71a6cc2d-71fb-4872-a42f-967a43beb910\") "
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.789902 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "71a6cc2d-71fb-4872-a42f-967a43beb910" (UID: "71a6cc2d-71fb-4872-a42f-967a43beb910"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.813807 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a6cc2d-71fb-4872-a42f-967a43beb910-kube-api-access-bjwjc" (OuterVolumeSpecName: "kube-api-access-bjwjc") pod "71a6cc2d-71fb-4872-a42f-967a43beb910" (UID: "71a6cc2d-71fb-4872-a42f-967a43beb910"). InnerVolumeSpecName "kube-api-access-bjwjc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.855374 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71a6cc2d-71fb-4872-a42f-967a43beb910" (UID: "71a6cc2d-71fb-4872-a42f-967a43beb910"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.855474 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "71a6cc2d-71fb-4872-a42f-967a43beb910" (UID: "71a6cc2d-71fb-4872-a42f-967a43beb910"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.874270 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-config" (OuterVolumeSpecName: "config") pod "71a6cc2d-71fb-4872-a42f-967a43beb910" (UID: "71a6cc2d-71fb-4872-a42f-967a43beb910"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.886606 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjwjc\" (UniqueName: \"kubernetes.io/projected/71a6cc2d-71fb-4872-a42f-967a43beb910-kube-api-access-bjwjc\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.886637 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-config\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.886649 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.886664 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:31 crc kubenswrapper[4735]: I0128 10:02:31.886673 4735 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71a6cc2d-71fb-4872-a42f-967a43beb910-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.283335 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerStarted","Data":"285ef7e4aeef6c7e4cac82847002bbc6583fc4a9c761cdc0790fc94d707c04ab"}
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.283647 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerStarted","Data":"76b5866571a25b1c05a6162a7fd6271cf591fc0b0a43432d1b846ded81d41fd2"}
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.285276 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78b5d6cfb6-dvwt2" event={"ID":"71a6cc2d-71fb-4872-a42f-967a43beb910","Type":"ContainerDied","Data":"4c21c2c7b241230cd00f8ba96b69233586a492cff3fcf85d37935c6c876340e6"}
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.285335 4735 scope.go:117] "RemoveContainer" containerID="14421055e39f0220830ac82638f32ebf9c7f8520cc4ffcdee3c4177c88b3d29a"
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.285336 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78b5d6cfb6-dvwt2"
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.318389 4735 scope.go:117] "RemoveContainer" containerID="fd69ba488e747c3743dd6b999c99cc985490a9f4db64fe0bd212ba5843dfd4a6"
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.339905 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-78b5d6cfb6-dvwt2"]
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.353093 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-78b5d6cfb6-dvwt2"]
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.602202 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": read tcp 10.217.0.2:51874->10.217.0.157:9292: read: connection reset by peer"
Jan 28 10:02:32 crc kubenswrapper[4735]: I0128 10:02:32.602231 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9292/healthcheck\": read tcp 10.217.0.2:51872->10.217.0.157:9292: read: connection reset by peer"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.094642 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.122372 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-httpd-run\") pod \"76060949-a1d9-4101-9d17-adc2e359004b\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") "
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.122432 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-scripts\") pod \"76060949-a1d9-4101-9d17-adc2e359004b\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") "
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.122448 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-internal-tls-certs\") pod \"76060949-a1d9-4101-9d17-adc2e359004b\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") "
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.122473 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ls6g\" (UniqueName: \"kubernetes.io/projected/76060949-a1d9-4101-9d17-adc2e359004b-kube-api-access-6ls6g\") pod \"76060949-a1d9-4101-9d17-adc2e359004b\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") "
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.122502 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-combined-ca-bundle\") pod \"76060949-a1d9-4101-9d17-adc2e359004b\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") "
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.122548 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"76060949-a1d9-4101-9d17-adc2e359004b\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") "
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.122602 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-logs\") pod \"76060949-a1d9-4101-9d17-adc2e359004b\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") "
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.122996 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-config-data\") pod \"76060949-a1d9-4101-9d17-adc2e359004b\" (UID: \"76060949-a1d9-4101-9d17-adc2e359004b\") "
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.131762 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "76060949-a1d9-4101-9d17-adc2e359004b" (UID: "76060949-a1d9-4101-9d17-adc2e359004b"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.134290 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-logs" (OuterVolumeSpecName: "logs") pod "76060949-a1d9-4101-9d17-adc2e359004b" (UID: "76060949-a1d9-4101-9d17-adc2e359004b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.134922 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "76060949-a1d9-4101-9d17-adc2e359004b" (UID: "76060949-a1d9-4101-9d17-adc2e359004b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.138796 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-scripts" (OuterVolumeSpecName: "scripts") pod "76060949-a1d9-4101-9d17-adc2e359004b" (UID: "76060949-a1d9-4101-9d17-adc2e359004b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.139240 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76060949-a1d9-4101-9d17-adc2e359004b-kube-api-access-6ls6g" (OuterVolumeSpecName: "kube-api-access-6ls6g") pod "76060949-a1d9-4101-9d17-adc2e359004b" (UID: "76060949-a1d9-4101-9d17-adc2e359004b"). InnerVolumeSpecName "kube-api-access-6ls6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.165459 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76060949-a1d9-4101-9d17-adc2e359004b" (UID: "76060949-a1d9-4101-9d17-adc2e359004b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.208254 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "76060949-a1d9-4101-9d17-adc2e359004b" (UID: "76060949-a1d9-4101-9d17-adc2e359004b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.213459 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-config-data" (OuterVolumeSpecName: "config-data") pod "76060949-a1d9-4101-9d17-adc2e359004b" (UID: "76060949-a1d9-4101-9d17-adc2e359004b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.225208 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-logs\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.225242 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.225254 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76060949-a1d9-4101-9d17-adc2e359004b-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.225262 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.225270 4735 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.225283 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ls6g\" (UniqueName: \"kubernetes.io/projected/76060949-a1d9-4101-9d17-adc2e359004b-kube-api-access-6ls6g\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.225292 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76060949-a1d9-4101-9d17-adc2e359004b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.225317 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.246449 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.294756 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerStarted","Data":"6ed61d0e32f911ebab66d1e90e477b6a84adfe78030d19bebb080c8279e0d730"}
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.296427 4735 generic.go:334] "Generic (PLEG): container finished" podID="76060949-a1d9-4101-9d17-adc2e359004b" containerID="e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a" exitCode=0
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.296489 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76060949-a1d9-4101-9d17-adc2e359004b","Type":"ContainerDied","Data":"e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a"}
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.296515 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"76060949-a1d9-4101-9d17-adc2e359004b","Type":"ContainerDied","Data":"76296177fc0cc0dacc10f1ac6e9bd4e8fae8a74a33f4da2c77e0a43210c78ea9"}
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.296512 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.296533 4735 scope.go:117] "RemoveContainer" containerID="e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.317104 4735 scope.go:117] "RemoveContainer" containerID="3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.327945 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.342607 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.370457 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.379539 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 10:02:33 crc kubenswrapper[4735]: E0128 10:02:33.379921 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerName="neutron-api"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.379933 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerName="neutron-api"
Jan 28 10:02:33 crc kubenswrapper[4735]: E0128 10:02:33.379956 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-httpd"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.379962 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-httpd"
Jan 28 10:02:33 crc kubenswrapper[4735]: E0128 10:02:33.379979 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-log"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.379985 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-log"
Jan 28 10:02:33 crc kubenswrapper[4735]: E0128 10:02:33.379992 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerName="neutron-httpd"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.380000 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerName="neutron-httpd"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.380270 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-log"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.380290 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerName="neutron-httpd"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.380302 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a6cc2d-71fb-4872-a42f-967a43beb910" containerName="neutron-api"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.380313 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="76060949-a1d9-4101-9d17-adc2e359004b" containerName="glance-httpd"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.381322 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.384623 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.384706 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.396375 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.410519 4735 scope.go:117] "RemoveContainer" containerID="e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a"
Jan 28 10:02:33 crc kubenswrapper[4735]: E0128 10:02:33.412146 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a\": container with ID starting with e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a not found: ID does not exist" containerID="e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.412184 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a"} err="failed to get container status \"e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a\": rpc error: code = NotFound desc = could not find container \"e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a\": container with ID starting with e9312f4c6d7769097e542cc0cf85ffe9767e0e4a021f06a13ac0567dab784b1a not found: ID does not exist"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.412207 4735 scope.go:117] "RemoveContainer" containerID="3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542"
Jan 28 10:02:33 crc kubenswrapper[4735]: E0128 10:02:33.412506 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542\": container with ID starting with 3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542 not found: ID does not exist" containerID="3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.412526 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542"} err="failed to get container status \"3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542\": rpc error: code = NotFound desc = could not find container \"3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542\": container with ID starting with 3cea8e7785c4c8ea37002880b27c8a540b10ef30fd88cad4ad6e91e835422542 not found: ID does not exist"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.429851 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0"
Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.429934 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1d08a0a-642a-404a-a47a-4f98f7d211b0-httpd-run\")
pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.429962 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.430017 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkgss\" (UniqueName: \"kubernetes.io/projected/f1d08a0a-642a-404a-a47a-4f98f7d211b0-kube-api-access-zkgss\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.430075 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d08a0a-642a-404a-a47a-4f98f7d211b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.430117 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.430183 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.430211 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.521473 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a6cc2d-71fb-4872-a42f-967a43beb910" path="/var/lib/kubelet/pods/71a6cc2d-71fb-4872-a42f-967a43beb910/volumes" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.522384 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76060949-a1d9-4101-9d17-adc2e359004b" path="/var/lib/kubelet/pods/76060949-a1d9-4101-9d17-adc2e359004b/volumes" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.532047 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.532134 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1d08a0a-642a-404a-a47a-4f98f7d211b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.532164 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.532210 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkgss\" (UniqueName: \"kubernetes.io/projected/f1d08a0a-642a-404a-a47a-4f98f7d211b0-kube-api-access-zkgss\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.532274 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d08a0a-642a-404a-a47a-4f98f7d211b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.532322 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.532347 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.532382 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.534161 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.534359 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1d08a0a-642a-404a-a47a-4f98f7d211b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.535892 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d08a0a-642a-404a-a47a-4f98f7d211b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.543930 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.548353 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.548658 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.550592 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d08a0a-642a-404a-a47a-4f98f7d211b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.560535 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkgss\" (UniqueName: \"kubernetes.io/projected/f1d08a0a-642a-404a-a47a-4f98f7d211b0-kube-api-access-zkgss\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.580182 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f1d08a0a-642a-404a-a47a-4f98f7d211b0\") " pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.628069 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dnxzq"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.629311 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.636413 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dnxzq"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.713085 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.735662 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2e75-1cf8-45be-984b-0fec5c06081d-operator-scripts\") pod \"nova-api-db-create-dnxzq\" (UID: \"dedf2e75-1cf8-45be-984b-0fec5c06081d\") " pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.735777 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7566c\" (UniqueName: \"kubernetes.io/projected/dedf2e75-1cf8-45be-984b-0fec5c06081d-kube-api-access-7566c\") pod \"nova-api-db-create-dnxzq\" (UID: \"dedf2e75-1cf8-45be-984b-0fec5c06081d\") " pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.736684 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-dacf-account-create-update-2vtrf"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.737893 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.744643 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.767951 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wrv9l"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.769128 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.785963 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wrv9l"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.793122 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dacf-account-create-update-2vtrf"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.837164 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c0163a-15b4-41e5-ba99-c1d296c31c34-operator-scripts\") pod \"nova-api-dacf-account-create-update-2vtrf\" (UID: \"15c0163a-15b4-41e5-ba99-c1d296c31c34\") " pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.837248 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7m7p\" (UniqueName: \"kubernetes.io/projected/15c0163a-15b4-41e5-ba99-c1d296c31c34-kube-api-access-p7m7p\") pod \"nova-api-dacf-account-create-update-2vtrf\" (UID: \"15c0163a-15b4-41e5-ba99-c1d296c31c34\") " pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.837298 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dedf2e75-1cf8-45be-984b-0fec5c06081d-operator-scripts\") pod \"nova-api-db-create-dnxzq\" (UID: \"dedf2e75-1cf8-45be-984b-0fec5c06081d\") " pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.837391 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7566c\" (UniqueName: \"kubernetes.io/projected/dedf2e75-1cf8-45be-984b-0fec5c06081d-kube-api-access-7566c\") pod \"nova-api-db-create-dnxzq\" (UID: \"dedf2e75-1cf8-45be-984b-0fec5c06081d\") " pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.838614 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2e75-1cf8-45be-984b-0fec5c06081d-operator-scripts\") pod \"nova-api-db-create-dnxzq\" (UID: \"dedf2e75-1cf8-45be-984b-0fec5c06081d\") " pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.856634 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lshmj"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.858134 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.861295 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7566c\" (UniqueName: \"kubernetes.io/projected/dedf2e75-1cf8-45be-984b-0fec5c06081d-kube-api-access-7566c\") pod \"nova-api-db-create-dnxzq\" (UID: \"dedf2e75-1cf8-45be-984b-0fec5c06081d\") " pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.864886 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lshmj"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.939688 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6cb2-account-create-update-x2mkl"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.940639 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c0163a-15b4-41e5-ba99-c1d296c31c34-operator-scripts\") pod \"nova-api-dacf-account-create-update-2vtrf\" (UID: \"15c0163a-15b4-41e5-ba99-c1d296c31c34\") " pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.940710 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7m7p\" (UniqueName: \"kubernetes.io/projected/15c0163a-15b4-41e5-ba99-c1d296c31c34-kube-api-access-p7m7p\") pod \"nova-api-dacf-account-create-update-2vtrf\" (UID: \"15c0163a-15b4-41e5-ba99-c1d296c31c34\") " pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.940806 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3d821-db59-43d8-8de6-290509114a4a-operator-scripts\") pod \"nova-cell0-db-create-wrv9l\" (UID: 
\"3bc3d821-db59-43d8-8de6-290509114a4a\") " pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.940848 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sxwk\" (UniqueName: \"kubernetes.io/projected/3bc3d821-db59-43d8-8de6-290509114a4a-kube-api-access-8sxwk\") pod \"nova-cell0-db-create-wrv9l\" (UID: \"3bc3d821-db59-43d8-8de6-290509114a4a\") " pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.941491 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c0163a-15b4-41e5-ba99-c1d296c31c34-operator-scripts\") pod \"nova-api-dacf-account-create-update-2vtrf\" (UID: \"15c0163a-15b4-41e5-ba99-c1d296c31c34\") " pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.944899 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.947385 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.975514 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6cb2-account-create-update-x2mkl"] Jan 28 10:02:33 crc kubenswrapper[4735]: I0128 10:02:33.976067 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.022176 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7m7p\" (UniqueName: \"kubernetes.io/projected/15c0163a-15b4-41e5-ba99-c1d296c31c34-kube-api-access-p7m7p\") pod \"nova-api-dacf-account-create-update-2vtrf\" (UID: \"15c0163a-15b4-41e5-ba99-c1d296c31c34\") " pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.042714 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dksv\" (UniqueName: \"kubernetes.io/projected/4563dc5a-d5e3-41f7-a719-7eb1457bb219-kube-api-access-8dksv\") pod \"nova-cell1-db-create-lshmj\" (UID: \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\") " pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.042808 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4563dc5a-d5e3-41f7-a719-7eb1457bb219-operator-scripts\") pod \"nova-cell1-db-create-lshmj\" (UID: \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\") " pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.042862 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d63f54a-6505-499c-88db-dd3b04968076-operator-scripts\") pod \"nova-cell0-6cb2-account-create-update-x2mkl\" (UID: \"8d63f54a-6505-499c-88db-dd3b04968076\") " pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.042905 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3bc3d821-db59-43d8-8de6-290509114a4a-operator-scripts\") pod \"nova-cell0-db-create-wrv9l\" (UID: \"3bc3d821-db59-43d8-8de6-290509114a4a\") " pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.042970 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sxwk\" (UniqueName: \"kubernetes.io/projected/3bc3d821-db59-43d8-8de6-290509114a4a-kube-api-access-8sxwk\") pod \"nova-cell0-db-create-wrv9l\" (UID: \"3bc3d821-db59-43d8-8de6-290509114a4a\") " pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.043022 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf8wt\" (UniqueName: \"kubernetes.io/projected/8d63f54a-6505-499c-88db-dd3b04968076-kube-api-access-kf8wt\") pod \"nova-cell0-6cb2-account-create-update-x2mkl\" (UID: \"8d63f54a-6505-499c-88db-dd3b04968076\") " pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.044520 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3d821-db59-43d8-8de6-290509114a4a-operator-scripts\") pod \"nova-cell0-db-create-wrv9l\" (UID: \"3bc3d821-db59-43d8-8de6-290509114a4a\") " pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.072570 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sxwk\" (UniqueName: \"kubernetes.io/projected/3bc3d821-db59-43d8-8de6-290509114a4a-kube-api-access-8sxwk\") pod \"nova-cell0-db-create-wrv9l\" (UID: \"3bc3d821-db59-43d8-8de6-290509114a4a\") " pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.124431 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.125441 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.144507 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dksv\" (UniqueName: \"kubernetes.io/projected/4563dc5a-d5e3-41f7-a719-7eb1457bb219-kube-api-access-8dksv\") pod \"nova-cell1-db-create-lshmj\" (UID: \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\") " pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.144568 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4563dc5a-d5e3-41f7-a719-7eb1457bb219-operator-scripts\") pod \"nova-cell1-db-create-lshmj\" (UID: \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\") " pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.144601 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d63f54a-6505-499c-88db-dd3b04968076-operator-scripts\") pod \"nova-cell0-6cb2-account-create-update-x2mkl\" (UID: \"8d63f54a-6505-499c-88db-dd3b04968076\") " pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.144688 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf8wt\" (UniqueName: \"kubernetes.io/projected/8d63f54a-6505-499c-88db-dd3b04968076-kube-api-access-kf8wt\") pod \"nova-cell0-6cb2-account-create-update-x2mkl\" (UID: \"8d63f54a-6505-499c-88db-dd3b04968076\") " pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.145795 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d63f54a-6505-499c-88db-dd3b04968076-operator-scripts\") pod \"nova-cell0-6cb2-account-create-update-x2mkl\" (UID: \"8d63f54a-6505-499c-88db-dd3b04968076\") " pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.145854 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4563dc5a-d5e3-41f7-a719-7eb1457bb219-operator-scripts\") pod \"nova-cell1-db-create-lshmj\" (UID: \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\") " pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.184527 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dksv\" (UniqueName: \"kubernetes.io/projected/4563dc5a-d5e3-41f7-a719-7eb1457bb219-kube-api-access-8dksv\") pod \"nova-cell1-db-create-lshmj\" (UID: \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\") " pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.188493 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.188544 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf8wt\" (UniqueName: \"kubernetes.io/projected/8d63f54a-6505-499c-88db-dd3b04968076-kube-api-access-kf8wt\") pod \"nova-cell0-6cb2-account-create-update-x2mkl\" (UID: \"8d63f54a-6505-499c-88db-dd3b04968076\") " pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.191481 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9b49-account-create-update-xdbbq"] Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.193343 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.198348 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9b49-account-create-update-xdbbq"] Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.198694 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.293563 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.349123 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tr5t\" (UniqueName: \"kubernetes.io/projected/b9f098d7-b30e-4612-aeaf-522f2706b053-kube-api-access-9tr5t\") pod \"nova-cell1-9b49-account-create-update-xdbbq\" (UID: \"b9f098d7-b30e-4612-aeaf-522f2706b053\") " pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.349486 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f098d7-b30e-4612-aeaf-522f2706b053-operator-scripts\") pod \"nova-cell1-9b49-account-create-update-xdbbq\" (UID: \"b9f098d7-b30e-4612-aeaf-522f2706b053\") " pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.388404 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerStarted","Data":"77dd1163dffc1d6896132e3e3bdb85959ecdd5a257762c108c72eae70d4f4aab"} Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.461637 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f098d7-b30e-4612-aeaf-522f2706b053-operator-scripts\") pod \"nova-cell1-9b49-account-create-update-xdbbq\" (UID: \"b9f098d7-b30e-4612-aeaf-522f2706b053\") " pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.461841 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tr5t\" (UniqueName: \"kubernetes.io/projected/b9f098d7-b30e-4612-aeaf-522f2706b053-kube-api-access-9tr5t\") pod 
\"nova-cell1-9b49-account-create-update-xdbbq\" (UID: \"b9f098d7-b30e-4612-aeaf-522f2706b053\") " pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.463252 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f098d7-b30e-4612-aeaf-522f2706b053-operator-scripts\") pod \"nova-cell1-9b49-account-create-update-xdbbq\" (UID: \"b9f098d7-b30e-4612-aeaf-522f2706b053\") " pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.474643 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.501346 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tr5t\" (UniqueName: \"kubernetes.io/projected/b9f098d7-b30e-4612-aeaf-522f2706b053-kube-api-access-9tr5t\") pod \"nova-cell1-9b49-account-create-update-xdbbq\" (UID: \"b9f098d7-b30e-4612-aeaf-522f2706b053\") " pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.526738 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dnxzq"] Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.528086 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:34 crc kubenswrapper[4735]: W0128 10:02:34.578094 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddedf2e75_1cf8_45be_984b_0fec5c06081d.slice/crio-2bc728a4b460becbbb353c4ea9e8c54f9e0f404ad504a5d4621e3939f28209c0 WatchSource:0}: Error finding container 2bc728a4b460becbbb353c4ea9e8c54f9e0f404ad504a5d4621e3939f28209c0: Status 404 returned error can't find the container with id 2bc728a4b460becbbb353c4ea9e8c54f9e0f404ad504a5d4621e3939f28209c0 Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.769572 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dacf-account-create-update-2vtrf"] Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.875601 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wrv9l"] Jan 28 10:02:34 crc kubenswrapper[4735]: W0128 10:02:34.884934 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bc3d821_db59_43d8_8de6_290509114a4a.slice/crio-4eed0dfe1ba77aa4bcb2be4af140e667c3c23ce2787cc741be49cff43c944e28 WatchSource:0}: Error finding container 4eed0dfe1ba77aa4bcb2be4af140e667c3c23ce2787cc741be49cff43c944e28: Status 404 returned error can't find the container with id 4eed0dfe1ba77aa4bcb2be4af140e667c3c23ce2787cc741be49cff43c944e28 Jan 28 10:02:34 crc kubenswrapper[4735]: I0128 10:02:34.960386 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lshmj"] Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.056637 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6cb2-account-create-update-x2mkl"] Jan 28 10:02:35 crc kubenswrapper[4735]: W0128 10:02:35.076050 4735 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d63f54a_6505_499c_88db_dd3b04968076.slice/crio-e3aed769ad148c33a6ff944a672462a28e1397b0495ab95a6fc56bb4dbf559fa WatchSource:0}: Error finding container e3aed769ad148c33a6ff944a672462a28e1397b0495ab95a6fc56bb4dbf559fa: Status 404 returned error can't find the container with id e3aed769ad148c33a6ff944a672462a28e1397b0495ab95a6fc56bb4dbf559fa Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.160417 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9b49-account-create-update-xdbbq"] Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.417250 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f1d08a0a-642a-404a-a47a-4f98f7d211b0","Type":"ContainerStarted","Data":"4fd2c436c06d5b7755625ef63cace6c8eb7a6050e8149d2bf4fa58253cc6580e"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.430462 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wrv9l" event={"ID":"3bc3d821-db59-43d8-8de6-290509114a4a","Type":"ContainerStarted","Data":"02412a12c8f09d8576e1d131c29ea47c640222a66a52a1dc58ec790f5712c481"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.430492 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wrv9l" event={"ID":"3bc3d821-db59-43d8-8de6-290509114a4a","Type":"ContainerStarted","Data":"4eed0dfe1ba77aa4bcb2be4af140e667c3c23ce2787cc741be49cff43c944e28"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.451353 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" event={"ID":"8d63f54a-6505-499c-88db-dd3b04968076","Type":"ContainerStarted","Data":"e3aed769ad148c33a6ff944a672462a28e1397b0495ab95a6fc56bb4dbf559fa"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.456029 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-lshmj" event={"ID":"4563dc5a-d5e3-41f7-a719-7eb1457bb219","Type":"ContainerStarted","Data":"a7149b3da04b56778ba24d6f24abc08f69d788e2b3c8ef4ba59e7b72c6d781a7"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.456156 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lshmj" event={"ID":"4563dc5a-d5e3-41f7-a719-7eb1457bb219","Type":"ContainerStarted","Data":"18586092c0fc08133b80422caa032d3ffd788ffb8af036641eb83bccdc26003b"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.456867 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-wrv9l" podStartSLOduration=2.456819918 podStartE2EDuration="2.456819918s" podCreationTimestamp="2026-01-28 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:02:35.442973471 +0000 UTC m=+1208.592637854" watchObservedRunningTime="2026-01-28 10:02:35.456819918 +0000 UTC m=+1208.606484291" Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.461710 4735 generic.go:334] "Generic (PLEG): container finished" podID="dedf2e75-1cf8-45be-984b-0fec5c06081d" containerID="04017251c9110f6a42b949fa822d03c3973285e1143a9dd2813dbdd6ed43a473" exitCode=0 Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.461845 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dnxzq" event={"ID":"dedf2e75-1cf8-45be-984b-0fec5c06081d","Type":"ContainerDied","Data":"04017251c9110f6a42b949fa822d03c3973285e1143a9dd2813dbdd6ed43a473"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.461871 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dnxzq" event={"ID":"dedf2e75-1cf8-45be-984b-0fec5c06081d","Type":"ContainerStarted","Data":"2bc728a4b460becbbb353c4ea9e8c54f9e0f404ad504a5d4621e3939f28209c0"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 
10:02:35.477875 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dacf-account-create-update-2vtrf" event={"ID":"15c0163a-15b4-41e5-ba99-c1d296c31c34","Type":"ContainerStarted","Data":"673b5b614324c3abc57c20680c2865d50585019cc9431f3bc6a0a82e4beac5b7"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.477918 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dacf-account-create-update-2vtrf" event={"ID":"15c0163a-15b4-41e5-ba99-c1d296c31c34","Type":"ContainerStarted","Data":"59bebc6f6e60907df99d42b6397d6f80a3516f75d0286537bb946935608d9291"} Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.479159 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-lshmj" podStartSLOduration=2.479140138 podStartE2EDuration="2.479140138s" podCreationTimestamp="2026-01-28 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:02:35.473215724 +0000 UTC m=+1208.622880097" watchObservedRunningTime="2026-01-28 10:02:35.479140138 +0000 UTC m=+1208.628804511" Jan 28 10:02:35 crc kubenswrapper[4735]: I0128 10:02:35.534209 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-dacf-account-create-update-2vtrf" podStartSLOduration=2.534190388 podStartE2EDuration="2.534190388s" podCreationTimestamp="2026-01-28 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:02:35.522511724 +0000 UTC m=+1208.672176097" watchObservedRunningTime="2026-01-28 10:02:35.534190388 +0000 UTC m=+1208.683854761" Jan 28 10:02:35 crc kubenswrapper[4735]: E0128 10:02:35.568155 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddedf2e75_1cf8_45be_984b_0fec5c06081d.slice/crio-04017251c9110f6a42b949fa822d03c3973285e1143a9dd2813dbdd6ed43a473.scope\": RecentStats: unable to find data in memory cache]" Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.509002 4735 generic.go:334] "Generic (PLEG): container finished" podID="4563dc5a-d5e3-41f7-a719-7eb1457bb219" containerID="a7149b3da04b56778ba24d6f24abc08f69d788e2b3c8ef4ba59e7b72c6d781a7" exitCode=0 Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.509555 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lshmj" event={"ID":"4563dc5a-d5e3-41f7-a719-7eb1457bb219","Type":"ContainerDied","Data":"a7149b3da04b56778ba24d6f24abc08f69d788e2b3c8ef4ba59e7b72c6d781a7"} Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.520304 4735 generic.go:334] "Generic (PLEG): container finished" podID="15c0163a-15b4-41e5-ba99-c1d296c31c34" containerID="673b5b614324c3abc57c20680c2865d50585019cc9431f3bc6a0a82e4beac5b7" exitCode=0 Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.520425 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dacf-account-create-update-2vtrf" event={"ID":"15c0163a-15b4-41e5-ba99-c1d296c31c34","Type":"ContainerDied","Data":"673b5b614324c3abc57c20680c2865d50585019cc9431f3bc6a0a82e4beac5b7"} Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.524341 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f1d08a0a-642a-404a-a47a-4f98f7d211b0","Type":"ContainerStarted","Data":"71b7eaa5c38bfb674f32dfa39bfb18df48b7d96923d853fe72aa34dce4adcf32"} Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.536466 4735 generic.go:334] "Generic (PLEG): container finished" podID="3bc3d821-db59-43d8-8de6-290509114a4a" containerID="02412a12c8f09d8576e1d131c29ea47c640222a66a52a1dc58ec790f5712c481" exitCode=0 Jan 28 10:02:36 crc 
kubenswrapper[4735]: I0128 10:02:36.536547 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wrv9l" event={"ID":"3bc3d821-db59-43d8-8de6-290509114a4a","Type":"ContainerDied","Data":"02412a12c8f09d8576e1d131c29ea47c640222a66a52a1dc58ec790f5712c481"} Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.546592 4735 generic.go:334] "Generic (PLEG): container finished" podID="b9f098d7-b30e-4612-aeaf-522f2706b053" containerID="92f8b337df447f9facb2d511857802961202961143273836128a79c0c589cf13" exitCode=0 Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.546686 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" event={"ID":"b9f098d7-b30e-4612-aeaf-522f2706b053","Type":"ContainerDied","Data":"92f8b337df447f9facb2d511857802961202961143273836128a79c0c589cf13"} Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.546718 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" event={"ID":"b9f098d7-b30e-4612-aeaf-522f2706b053","Type":"ContainerStarted","Data":"e2e527efb0ba6d3450c190466f0166470ce57291306152da826e6193d734ce04"} Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.550294 4735 generic.go:334] "Generic (PLEG): container finished" podID="8d63f54a-6505-499c-88db-dd3b04968076" containerID="9746b8080e2af23eb02c552aab4c5f862b5f01055d86a9b94c03ddca827502f5" exitCode=0 Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.550375 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" event={"ID":"8d63f54a-6505-499c-88db-dd3b04968076","Type":"ContainerDied","Data":"9746b8080e2af23eb02c552aab4c5f862b5f01055d86a9b94c03ddca827502f5"} Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.568990 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerStarted","Data":"47ee5474c3440842f07910856d83232c369271c83908d9fd406372e5a92d949c"} Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.569378 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.615935 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.531367473 podStartE2EDuration="6.615917085s" podCreationTimestamp="2026-01-28 10:02:30 +0000 UTC" firstStartedPulling="2026-01-28 10:02:31.449054566 +0000 UTC m=+1204.598718939" lastFinishedPulling="2026-01-28 10:02:35.533604178 +0000 UTC m=+1208.683268551" observedRunningTime="2026-01-28 10:02:36.615283453 +0000 UTC m=+1209.764947826" watchObservedRunningTime="2026-01-28 10:02:36.615917085 +0000 UTC m=+1209.765581458" Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.919546 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.923038 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7566c\" (UniqueName: \"kubernetes.io/projected/dedf2e75-1cf8-45be-984b-0fec5c06081d-kube-api-access-7566c\") pod \"dedf2e75-1cf8-45be-984b-0fec5c06081d\" (UID: \"dedf2e75-1cf8-45be-984b-0fec5c06081d\") " Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.923097 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2e75-1cf8-45be-984b-0fec5c06081d-operator-scripts\") pod \"dedf2e75-1cf8-45be-984b-0fec5c06081d\" (UID: \"dedf2e75-1cf8-45be-984b-0fec5c06081d\") " Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.923922 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedf2e75-1cf8-45be-984b-0fec5c06081d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dedf2e75-1cf8-45be-984b-0fec5c06081d" (UID: "dedf2e75-1cf8-45be-984b-0fec5c06081d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:36 crc kubenswrapper[4735]: I0128 10:02:36.929966 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedf2e75-1cf8-45be-984b-0fec5c06081d-kube-api-access-7566c" (OuterVolumeSpecName: "kube-api-access-7566c") pod "dedf2e75-1cf8-45be-984b-0fec5c06081d" (UID: "dedf2e75-1cf8-45be-984b-0fec5c06081d"). InnerVolumeSpecName "kube-api-access-7566c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:37 crc kubenswrapper[4735]: I0128 10:02:37.025454 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7566c\" (UniqueName: \"kubernetes.io/projected/dedf2e75-1cf8-45be-984b-0fec5c06081d-kube-api-access-7566c\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:37 crc kubenswrapper[4735]: I0128 10:02:37.025493 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2e75-1cf8-45be-984b-0fec5c06081d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:37 crc kubenswrapper[4735]: I0128 10:02:37.579327 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f1d08a0a-642a-404a-a47a-4f98f7d211b0","Type":"ContainerStarted","Data":"a5ab12fa3658d3d0216b72de52d7bdfc950cfb27c769736f9cdc966af507f910"} Jan 28 10:02:37 crc kubenswrapper[4735]: I0128 10:02:37.583268 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dnxzq" Jan 28 10:02:37 crc kubenswrapper[4735]: I0128 10:02:37.583911 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dnxzq" event={"ID":"dedf2e75-1cf8-45be-984b-0fec5c06081d","Type":"ContainerDied","Data":"2bc728a4b460becbbb353c4ea9e8c54f9e0f404ad504a5d4621e3939f28209c0"} Jan 28 10:02:37 crc kubenswrapper[4735]: I0128 10:02:37.583937 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bc728a4b460becbbb353c4ea9e8c54f9e0f404ad504a5d4621e3939f28209c0" Jan 28 10:02:37 crc kubenswrapper[4735]: I0128 10:02:37.617658 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.617635478 podStartE2EDuration="4.617635478s" podCreationTimestamp="2026-01-28 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:02:37.61249487 +0000 UTC m=+1210.762159253" watchObservedRunningTime="2026-01-28 10:02:37.617635478 +0000 UTC m=+1210.767299851" Jan 28 10:02:37 crc kubenswrapper[4735]: I0128 10:02:37.996130 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.148262 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c0163a-15b4-41e5-ba99-c1d296c31c34-operator-scripts\") pod \"15c0163a-15b4-41e5-ba99-c1d296c31c34\" (UID: \"15c0163a-15b4-41e5-ba99-c1d296c31c34\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.148315 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7m7p\" (UniqueName: \"kubernetes.io/projected/15c0163a-15b4-41e5-ba99-c1d296c31c34-kube-api-access-p7m7p\") pod \"15c0163a-15b4-41e5-ba99-c1d296c31c34\" (UID: \"15c0163a-15b4-41e5-ba99-c1d296c31c34\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.152279 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c0163a-15b4-41e5-ba99-c1d296c31c34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15c0163a-15b4-41e5-ba99-c1d296c31c34" (UID: "15c0163a-15b4-41e5-ba99-c1d296c31c34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.153700 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c0163a-15b4-41e5-ba99-c1d296c31c34-kube-api-access-p7m7p" (OuterVolumeSpecName: "kube-api-access-p7m7p") pod "15c0163a-15b4-41e5-ba99-c1d296c31c34" (UID: "15c0163a-15b4-41e5-ba99-c1d296c31c34"). InnerVolumeSpecName "kube-api-access-p7m7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.192263 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.192499 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerName="glance-log" containerID="cri-o://0031570d04af9d6dc91a60348887e2de0a3465aebf61536eac76760caf0fef91" gracePeriod=30 Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.192581 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerName="glance-httpd" containerID="cri-o://3a8c84148521a148189f8b6770c920d62c1e943632aa62f730ef618dff1acc36" gracePeriod=30 Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.250488 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c0163a-15b4-41e5-ba99-c1d296c31c34-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.250527 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7m7p\" (UniqueName: \"kubernetes.io/projected/15c0163a-15b4-41e5-ba99-c1d296c31c34-kube-api-access-p7m7p\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.264439 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.275556 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.287403 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.289042 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.453338 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f098d7-b30e-4612-aeaf-522f2706b053-operator-scripts\") pod \"b9f098d7-b30e-4612-aeaf-522f2706b053\" (UID: \"b9f098d7-b30e-4612-aeaf-522f2706b053\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.453483 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sxwk\" (UniqueName: \"kubernetes.io/projected/3bc3d821-db59-43d8-8de6-290509114a4a-kube-api-access-8sxwk\") pod \"3bc3d821-db59-43d8-8de6-290509114a4a\" (UID: \"3bc3d821-db59-43d8-8de6-290509114a4a\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.453510 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf8wt\" (UniqueName: \"kubernetes.io/projected/8d63f54a-6505-499c-88db-dd3b04968076-kube-api-access-kf8wt\") pod \"8d63f54a-6505-499c-88db-dd3b04968076\" (UID: \"8d63f54a-6505-499c-88db-dd3b04968076\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.453575 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dksv\" (UniqueName: \"kubernetes.io/projected/4563dc5a-d5e3-41f7-a719-7eb1457bb219-kube-api-access-8dksv\") pod \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\" (UID: \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.453649 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tr5t\" (UniqueName: 
\"kubernetes.io/projected/b9f098d7-b30e-4612-aeaf-522f2706b053-kube-api-access-9tr5t\") pod \"b9f098d7-b30e-4612-aeaf-522f2706b053\" (UID: \"b9f098d7-b30e-4612-aeaf-522f2706b053\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.453713 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d63f54a-6505-499c-88db-dd3b04968076-operator-scripts\") pod \"8d63f54a-6505-499c-88db-dd3b04968076\" (UID: \"8d63f54a-6505-499c-88db-dd3b04968076\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.453767 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4563dc5a-d5e3-41f7-a719-7eb1457bb219-operator-scripts\") pod \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\" (UID: \"4563dc5a-d5e3-41f7-a719-7eb1457bb219\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.453840 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3d821-db59-43d8-8de6-290509114a4a-operator-scripts\") pod \"3bc3d821-db59-43d8-8de6-290509114a4a\" (UID: \"3bc3d821-db59-43d8-8de6-290509114a4a\") " Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.454271 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f098d7-b30e-4612-aeaf-522f2706b053-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9f098d7-b30e-4612-aeaf-522f2706b053" (UID: "b9f098d7-b30e-4612-aeaf-522f2706b053"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.454457 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d63f54a-6505-499c-88db-dd3b04968076-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d63f54a-6505-499c-88db-dd3b04968076" (UID: "8d63f54a-6505-499c-88db-dd3b04968076"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.454626 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4563dc5a-d5e3-41f7-a719-7eb1457bb219-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4563dc5a-d5e3-41f7-a719-7eb1457bb219" (UID: "4563dc5a-d5e3-41f7-a719-7eb1457bb219"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.454666 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc3d821-db59-43d8-8de6-290509114a4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bc3d821-db59-43d8-8de6-290509114a4a" (UID: "3bc3d821-db59-43d8-8de6-290509114a4a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.464131 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4563dc5a-d5e3-41f7-a719-7eb1457bb219-kube-api-access-8dksv" (OuterVolumeSpecName: "kube-api-access-8dksv") pod "4563dc5a-d5e3-41f7-a719-7eb1457bb219" (UID: "4563dc5a-d5e3-41f7-a719-7eb1457bb219"). InnerVolumeSpecName "kube-api-access-8dksv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.464210 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc3d821-db59-43d8-8de6-290509114a4a-kube-api-access-8sxwk" (OuterVolumeSpecName: "kube-api-access-8sxwk") pod "3bc3d821-db59-43d8-8de6-290509114a4a" (UID: "3bc3d821-db59-43d8-8de6-290509114a4a"). InnerVolumeSpecName "kube-api-access-8sxwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.464277 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f098d7-b30e-4612-aeaf-522f2706b053-kube-api-access-9tr5t" (OuterVolumeSpecName: "kube-api-access-9tr5t") pod "b9f098d7-b30e-4612-aeaf-522f2706b053" (UID: "b9f098d7-b30e-4612-aeaf-522f2706b053"). InnerVolumeSpecName "kube-api-access-9tr5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.464293 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d63f54a-6505-499c-88db-dd3b04968076-kube-api-access-kf8wt" (OuterVolumeSpecName: "kube-api-access-kf8wt") pod "8d63f54a-6505-499c-88db-dd3b04968076" (UID: "8d63f54a-6505-499c-88db-dd3b04968076"). InnerVolumeSpecName "kube-api-access-kf8wt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.556060 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tr5t\" (UniqueName: \"kubernetes.io/projected/b9f098d7-b30e-4612-aeaf-522f2706b053-kube-api-access-9tr5t\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.556357 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d63f54a-6505-499c-88db-dd3b04968076-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.556373 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4563dc5a-d5e3-41f7-a719-7eb1457bb219-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.556388 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3d821-db59-43d8-8de6-290509114a4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.556400 4735 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9f098d7-b30e-4612-aeaf-522f2706b053-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.556412 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sxwk\" (UniqueName: \"kubernetes.io/projected/3bc3d821-db59-43d8-8de6-290509114a4a-kube-api-access-8sxwk\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.556423 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf8wt\" (UniqueName: \"kubernetes.io/projected/8d63f54a-6505-499c-88db-dd3b04968076-kube-api-access-kf8wt\") on node \"crc\" DevicePath \"\"" Jan 28 
10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.556435 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dksv\" (UniqueName: \"kubernetes.io/projected/4563dc5a-d5e3-41f7-a719-7eb1457bb219-kube-api-access-8dksv\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.602197 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lshmj" event={"ID":"4563dc5a-d5e3-41f7-a719-7eb1457bb219","Type":"ContainerDied","Data":"18586092c0fc08133b80422caa032d3ffd788ffb8af036641eb83bccdc26003b"} Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.602242 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18586092c0fc08133b80422caa032d3ffd788ffb8af036641eb83bccdc26003b" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.602346 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lshmj" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.604527 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dacf-account-create-update-2vtrf" event={"ID":"15c0163a-15b4-41e5-ba99-c1d296c31c34","Type":"ContainerDied","Data":"59bebc6f6e60907df99d42b6397d6f80a3516f75d0286537bb946935608d9291"} Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.604568 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59bebc6f6e60907df99d42b6397d6f80a3516f75d0286537bb946935608d9291" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.604649 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dacf-account-create-update-2vtrf" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.607349 4735 generic.go:334] "Generic (PLEG): container finished" podID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerID="0031570d04af9d6dc91a60348887e2de0a3465aebf61536eac76760caf0fef91" exitCode=143 Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.607465 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc","Type":"ContainerDied","Data":"0031570d04af9d6dc91a60348887e2de0a3465aebf61536eac76760caf0fef91"} Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.633659 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wrv9l" event={"ID":"3bc3d821-db59-43d8-8de6-290509114a4a","Type":"ContainerDied","Data":"4eed0dfe1ba77aa4bcb2be4af140e667c3c23ce2787cc741be49cff43c944e28"} Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.633695 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eed0dfe1ba77aa4bcb2be4af140e667c3c23ce2787cc741be49cff43c944e28" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.633769 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wrv9l" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.636417 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.636421 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9b49-account-create-update-xdbbq" event={"ID":"b9f098d7-b30e-4612-aeaf-522f2706b053","Type":"ContainerDied","Data":"e2e527efb0ba6d3450c190466f0166470ce57291306152da826e6193d734ce04"} Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.636549 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2e527efb0ba6d3450c190466f0166470ce57291306152da826e6193d734ce04" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.638619 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.639103 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6cb2-account-create-update-x2mkl" event={"ID":"8d63f54a-6505-499c-88db-dd3b04968076","Type":"ContainerDied","Data":"e3aed769ad148c33a6ff944a672462a28e1397b0495ab95a6fc56bb4dbf559fa"} Jan 28 10:02:38 crc kubenswrapper[4735]: I0128 10:02:38.639155 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3aed769ad148c33a6ff944a672462a28e1397b0495ab95a6fc56bb4dbf559fa" Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.670570 4735 generic.go:334] "Generic (PLEG): container finished" podID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerID="3a8c84148521a148189f8b6770c920d62c1e943632aa62f730ef618dff1acc36" exitCode=0 Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.671154 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc","Type":"ContainerDied","Data":"3a8c84148521a148189f8b6770c920d62c1e943632aa62f730ef618dff1acc36"} Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 
10:02:41.924295 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.934406 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-scripts\") pod \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.934524 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-config-data\") pod \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.934608 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-public-tls-certs\") pod \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.934644 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-httpd-run\") pod \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.934722 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-logs\") pod \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.934798 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-cptm8\" (UniqueName: \"kubernetes.io/projected/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-kube-api-access-cptm8\") pod \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.934830 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-combined-ca-bundle\") pod \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.934853 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\" (UID: \"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc\") " Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.935191 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-logs" (OuterVolumeSpecName: "logs") pod "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" (UID: "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.935538 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.935871 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" (UID: "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.944984 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-kube-api-access-cptm8" (OuterVolumeSpecName: "kube-api-access-cptm8") pod "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" (UID: "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc"). InnerVolumeSpecName "kube-api-access-cptm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.947034 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" (UID: "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 10:02:41 crc kubenswrapper[4735]: I0128 10:02:41.958992 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-scripts" (OuterVolumeSpecName: "scripts") pod "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" (UID: "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.016034 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" (UID: "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.019949 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" (UID: "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.037693 4735 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.037763 4735 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.037780 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cptm8\" (UniqueName: \"kubernetes.io/projected/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-kube-api-access-cptm8\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.037795 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.037830 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.037842 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.064496 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.083908 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-config-data" (OuterVolumeSpecName: "config-data") pod "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" (UID: "78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.138850 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.138880 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.682158 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc","Type":"ContainerDied","Data":"8bed4ee2559b9ae046d792f0bf2f2a4b8d69aa2f9e66a4e4edc2d377810d94e9"} Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.683152 4735 scope.go:117] "RemoveContainer" containerID="3a8c84148521a148189f8b6770c920d62c1e943632aa62f730ef618dff1acc36" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.682243 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.718986 4735 scope.go:117] "RemoveContainer" containerID="0031570d04af9d6dc91a60348887e2de0a3465aebf61536eac76760caf0fef91" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.735444 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.745964 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.760342 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:02:42 crc kubenswrapper[4735]: E0128 10:02:42.760843 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f098d7-b30e-4612-aeaf-522f2706b053" containerName="mariadb-account-create-update" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.760865 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f098d7-b30e-4612-aeaf-522f2706b053" containerName="mariadb-account-create-update" Jan 28 10:02:42 crc kubenswrapper[4735]: E0128 10:02:42.760882 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedf2e75-1cf8-45be-984b-0fec5c06081d" containerName="mariadb-database-create" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.760891 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedf2e75-1cf8-45be-984b-0fec5c06081d" containerName="mariadb-database-create" Jan 28 10:02:42 crc kubenswrapper[4735]: E0128 10:02:42.760910 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerName="glance-httpd" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.760919 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerName="glance-httpd" Jan 28 10:02:42 crc 
kubenswrapper[4735]: E0128 10:02:42.760932 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc3d821-db59-43d8-8de6-290509114a4a" containerName="mariadb-database-create" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.760954 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc3d821-db59-43d8-8de6-290509114a4a" containerName="mariadb-database-create" Jan 28 10:02:42 crc kubenswrapper[4735]: E0128 10:02:42.760984 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c0163a-15b4-41e5-ba99-c1d296c31c34" containerName="mariadb-account-create-update" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.760993 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c0163a-15b4-41e5-ba99-c1d296c31c34" containerName="mariadb-account-create-update" Jan 28 10:02:42 crc kubenswrapper[4735]: E0128 10:02:42.761009 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerName="glance-log" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761017 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerName="glance-log" Jan 28 10:02:42 crc kubenswrapper[4735]: E0128 10:02:42.761036 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d63f54a-6505-499c-88db-dd3b04968076" containerName="mariadb-account-create-update" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761044 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d63f54a-6505-499c-88db-dd3b04968076" containerName="mariadb-account-create-update" Jan 28 10:02:42 crc kubenswrapper[4735]: E0128 10:02:42.761058 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4563dc5a-d5e3-41f7-a719-7eb1457bb219" containerName="mariadb-database-create" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761067 4735 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4563dc5a-d5e3-41f7-a719-7eb1457bb219" containerName="mariadb-database-create" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761272 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc3d821-db59-43d8-8de6-290509114a4a" containerName="mariadb-database-create" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761289 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c0163a-15b4-41e5-ba99-c1d296c31c34" containerName="mariadb-account-create-update" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761301 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerName="glance-log" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761317 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" containerName="glance-httpd" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761335 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d63f54a-6505-499c-88db-dd3b04968076" containerName="mariadb-account-create-update" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761348 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedf2e75-1cf8-45be-984b-0fec5c06081d" containerName="mariadb-database-create" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761360 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="4563dc5a-d5e3-41f7-a719-7eb1457bb219" containerName="mariadb-database-create" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.761376 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9f098d7-b30e-4612-aeaf-522f2706b053" containerName="mariadb-account-create-update" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.762672 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.764557 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.764900 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.773708 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.952436 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-logs\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.952523 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.952584 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj4ls\" (UniqueName: \"kubernetes.io/projected/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-kube-api-access-pj4ls\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.952733 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.952886 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.952920 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.952957 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:42 crc kubenswrapper[4735]: I0128 10:02:42.953178 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.055388 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4ls\" (UniqueName: 
\"kubernetes.io/projected/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-kube-api-access-pj4ls\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.055488 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.055534 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.055560 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.055588 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.055654 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-combined-ca-bundle\") 
pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.055677 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-logs\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.055711 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.057296 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.057564 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.057648 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-logs\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " 
pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.062001 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.063527 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.065469 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.065889 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.078665 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj4ls\" (UniqueName: \"kubernetes.io/projected/e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6-kube-api-access-pj4ls\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: 
I0128 10:02:43.093220 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6\") " pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.099456 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.372634 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.373722 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="ceilometer-central-agent" containerID="cri-o://285ef7e4aeef6c7e4cac82847002bbc6583fc4a9c761cdc0790fc94d707c04ab" gracePeriod=30 Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.373780 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="sg-core" containerID="cri-o://77dd1163dffc1d6896132e3e3bdb85959ecdd5a257762c108c72eae70d4f4aab" gracePeriod=30 Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.373791 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="proxy-httpd" containerID="cri-o://47ee5474c3440842f07910856d83232c369271c83908d9fd406372e5a92d949c" gracePeriod=30 Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.373872 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="ceilometer-notification-agent" 
containerID="cri-o://6ed61d0e32f911ebab66d1e90e477b6a84adfe78030d19bebb080c8279e0d730" gracePeriod=30 Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.524515 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc" path="/var/lib/kubelet/pods/78fa8c7f-8838-42e8-8f6d-ce1f2c3edebc/volumes" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.679166 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 10:02:43 crc kubenswrapper[4735]: W0128 10:02:43.681633 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9a4ff23_cbc6_4505_a6ae_f6094bfbeac6.slice/crio-4b456931fac3a58a243588c338edbf021d625f4084f2de531d0fcbd874013044 WatchSource:0}: Error finding container 4b456931fac3a58a243588c338edbf021d625f4084f2de531d0fcbd874013044: Status 404 returned error can't find the container with id 4b456931fac3a58a243588c338edbf021d625f4084f2de531d0fcbd874013044 Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.696273 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6","Type":"ContainerStarted","Data":"4b456931fac3a58a243588c338edbf021d625f4084f2de531d0fcbd874013044"} Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.700076 4735 generic.go:334] "Generic (PLEG): container finished" podID="e618722b-4792-416d-8d97-abc07296800c" containerID="47ee5474c3440842f07910856d83232c369271c83908d9fd406372e5a92d949c" exitCode=0 Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.700098 4735 generic.go:334] "Generic (PLEG): container finished" podID="e618722b-4792-416d-8d97-abc07296800c" containerID="77dd1163dffc1d6896132e3e3bdb85959ecdd5a257762c108c72eae70d4f4aab" exitCode=2 Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.700114 4735 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerDied","Data":"47ee5474c3440842f07910856d83232c369271c83908d9fd406372e5a92d949c"} Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.700130 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerDied","Data":"77dd1163dffc1d6896132e3e3bdb85959ecdd5a257762c108c72eae70d4f4aab"} Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.714579 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.714640 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.747662 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:43 crc kubenswrapper[4735]: I0128 10:02:43.762146 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.280191 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p8gnh"] Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.282424 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.288007 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.288007 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w28c6" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.288281 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.301963 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p8gnh"] Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.381356 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-config-data\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.381482 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c49zl\" (UniqueName: \"kubernetes.io/projected/71e767c2-123a-497e-83b8-9fe0b8f45c77-kube-api-access-c49zl\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.381525 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " 
pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.381628 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-scripts\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.483538 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.483635 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-scripts\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.483679 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-config-data\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.483802 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c49zl\" (UniqueName: \"kubernetes.io/projected/71e767c2-123a-497e-83b8-9fe0b8f45c77-kube-api-access-c49zl\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " 
pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.487846 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-scripts\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.487901 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-config-data\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.488522 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.506460 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c49zl\" (UniqueName: \"kubernetes.io/projected/71e767c2-123a-497e-83b8-9fe0b8f45c77-kube-api-access-c49zl\") pod \"nova-cell0-conductor-db-sync-p8gnh\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.607952 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.761814 4735 generic.go:334] "Generic (PLEG): container finished" podID="e618722b-4792-416d-8d97-abc07296800c" containerID="6ed61d0e32f911ebab66d1e90e477b6a84adfe78030d19bebb080c8279e0d730" exitCode=0 Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.762162 4735 generic.go:334] "Generic (PLEG): container finished" podID="e618722b-4792-416d-8d97-abc07296800c" containerID="285ef7e4aeef6c7e4cac82847002bbc6583fc4a9c761cdc0790fc94d707c04ab" exitCode=0 Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.761900 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerDied","Data":"6ed61d0e32f911ebab66d1e90e477b6a84adfe78030d19bebb080c8279e0d730"} Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.762215 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerDied","Data":"285ef7e4aeef6c7e4cac82847002bbc6583fc4a9c761cdc0790fc94d707c04ab"} Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.771308 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6","Type":"ContainerStarted","Data":"74f63e329bbab61c6bbca4cc61ae795934b703fd7860ea4dba6f2a837d13a028"} Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.771375 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.772631 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:44 crc kubenswrapper[4735]: I0128 10:02:44.909641 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.013441 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-sg-core-conf-yaml\") pod \"e618722b-4792-416d-8d97-abc07296800c\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.013947 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnfdq\" (UniqueName: \"kubernetes.io/projected/e618722b-4792-416d-8d97-abc07296800c-kube-api-access-hnfdq\") pod \"e618722b-4792-416d-8d97-abc07296800c\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.014012 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-combined-ca-bundle\") pod \"e618722b-4792-416d-8d97-abc07296800c\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.014056 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-config-data\") pod \"e618722b-4792-416d-8d97-abc07296800c\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.014107 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-log-httpd\") pod \"e618722b-4792-416d-8d97-abc07296800c\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.014145 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-run-httpd\") pod \"e618722b-4792-416d-8d97-abc07296800c\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.014190 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-scripts\") pod \"e618722b-4792-416d-8d97-abc07296800c\" (UID: \"e618722b-4792-416d-8d97-abc07296800c\") " Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.015399 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e618722b-4792-416d-8d97-abc07296800c" (UID: "e618722b-4792-416d-8d97-abc07296800c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.015904 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e618722b-4792-416d-8d97-abc07296800c" (UID: "e618722b-4792-416d-8d97-abc07296800c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.021712 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-scripts" (OuterVolumeSpecName: "scripts") pod "e618722b-4792-416d-8d97-abc07296800c" (UID: "e618722b-4792-416d-8d97-abc07296800c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.022047 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e618722b-4792-416d-8d97-abc07296800c-kube-api-access-hnfdq" (OuterVolumeSpecName: "kube-api-access-hnfdq") pod "e618722b-4792-416d-8d97-abc07296800c" (UID: "e618722b-4792-416d-8d97-abc07296800c"). InnerVolumeSpecName "kube-api-access-hnfdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.050105 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e618722b-4792-416d-8d97-abc07296800c" (UID: "e618722b-4792-416d-8d97-abc07296800c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.117171 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.117635 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e618722b-4792-416d-8d97-abc07296800c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.117727 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.117830 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" 
Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.117938 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnfdq\" (UniqueName: \"kubernetes.io/projected/e618722b-4792-416d-8d97-abc07296800c-kube-api-access-hnfdq\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.123588 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e618722b-4792-416d-8d97-abc07296800c" (UID: "e618722b-4792-416d-8d97-abc07296800c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.126999 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-config-data" (OuterVolumeSpecName: "config-data") pod "e618722b-4792-416d-8d97-abc07296800c" (UID: "e618722b-4792-416d-8d97-abc07296800c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:02:45 crc kubenswrapper[4735]: W0128 10:02:45.152814 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71e767c2_123a_497e_83b8_9fe0b8f45c77.slice/crio-3d8eedf91e4dd1fe7cd2b33eea52a94a77d796b1c5909f52c19a3dd356b51082 WatchSource:0}: Error finding container 3d8eedf91e4dd1fe7cd2b33eea52a94a77d796b1c5909f52c19a3dd356b51082: Status 404 returned error can't find the container with id 3d8eedf91e4dd1fe7cd2b33eea52a94a77d796b1c5909f52c19a3dd356b51082 Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.154222 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p8gnh"] Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.220097 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.220160 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e618722b-4792-416d-8d97-abc07296800c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.779412 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-p8gnh" event={"ID":"71e767c2-123a-497e-83b8-9fe0b8f45c77","Type":"ContainerStarted","Data":"3d8eedf91e4dd1fe7cd2b33eea52a94a77d796b1c5909f52c19a3dd356b51082"} Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.782505 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e618722b-4792-416d-8d97-abc07296800c","Type":"ContainerDied","Data":"76b5866571a25b1c05a6162a7fd6271cf591fc0b0a43432d1b846ded81d41fd2"} Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.782537 4735 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.782572 4735 scope.go:117] "RemoveContainer" containerID="47ee5474c3440842f07910856d83232c369271c83908d9fd406372e5a92d949c" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.786410 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6","Type":"ContainerStarted","Data":"f6144e839efc57282dc41ff6df5af48d3e6791c805c3626fad6bcaea511b74d5"} Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.812180 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.828329 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.850336 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:45 crc kubenswrapper[4735]: E0128 10:02:45.850861 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="proxy-httpd" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.850885 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="proxy-httpd" Jan 28 10:02:45 crc kubenswrapper[4735]: E0128 10:02:45.850900 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="ceilometer-notification-agent" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.850908 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="ceilometer-notification-agent" Jan 28 10:02:45 crc kubenswrapper[4735]: E0128 10:02:45.850919 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="ceilometer-central-agent" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.850926 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="ceilometer-central-agent" Jan 28 10:02:45 crc kubenswrapper[4735]: E0128 10:02:45.850944 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="sg-core" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.850950 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="sg-core" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.850929 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.8509097199999998 podStartE2EDuration="3.85090972s" podCreationTimestamp="2026-01-28 10:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:02:45.83379414 +0000 UTC m=+1218.983458513" watchObservedRunningTime="2026-01-28 10:02:45.85090972 +0000 UTC m=+1219.000574093" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.851119 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="ceilometer-central-agent" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.851132 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="ceilometer-notification-agent" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.851151 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="proxy-httpd" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.851161 4735 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e618722b-4792-416d-8d97-abc07296800c" containerName="sg-core" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.853288 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.858562 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.858561 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.868621 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.882528 4735 scope.go:117] "RemoveContainer" containerID="77dd1163dffc1d6896132e3e3bdb85959ecdd5a257762c108c72eae70d4f4aab" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.906950 4735 scope.go:117] "RemoveContainer" containerID="6ed61d0e32f911ebab66d1e90e477b6a84adfe78030d19bebb080c8279e0d730" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.935200 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.935339 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-scripts\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.935665 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zc9tf\" (UniqueName: \"kubernetes.io/projected/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-kube-api-access-zc9tf\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.935702 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-run-httpd\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.935796 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-log-httpd\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.935907 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-config-data\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.937075 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:45 crc kubenswrapper[4735]: I0128 10:02:45.941634 4735 scope.go:117] "RemoveContainer" containerID="285ef7e4aeef6c7e4cac82847002bbc6583fc4a9c761cdc0790fc94d707c04ab" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.038405 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.038458 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-scripts\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.038534 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9tf\" (UniqueName: \"kubernetes.io/projected/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-kube-api-access-zc9tf\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.038554 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-run-httpd\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.038573 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-log-httpd\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.038595 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-config-data\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " 
pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.038627 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.039394 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-run-httpd\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.039108 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-log-httpd\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.044319 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.044844 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-scripts\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.044908 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-config-data\") pod 
\"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.045225 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.059055 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9tf\" (UniqueName: \"kubernetes.io/projected/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-kube-api-access-zc9tf\") pod \"ceilometer-0\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.191000 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.672965 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.798040 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerStarted","Data":"9cae8f5250723d2c120b5dbe7cb2f95e68cbbaf56122bfd1cfacb66f77933873"} Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.798111 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.798121 4735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.943402 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:46 crc kubenswrapper[4735]: I0128 10:02:46.947309 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 10:02:47 crc kubenswrapper[4735]: I0128 10:02:47.521705 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e618722b-4792-416d-8d97-abc07296800c" path="/var/lib/kubelet/pods/e618722b-4792-416d-8d97-abc07296800c/volumes" Jan 28 10:02:47 crc kubenswrapper[4735]: I0128 10:02:47.813721 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerStarted","Data":"7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8"} Jan 28 10:02:48 crc kubenswrapper[4735]: I0128 10:02:48.824425 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerStarted","Data":"1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f"} Jan 28 10:02:52 crc kubenswrapper[4735]: I0128 10:02:52.883252 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerStarted","Data":"54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0"} Jan 28 10:02:52 crc kubenswrapper[4735]: I0128 10:02:52.886616 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-p8gnh" event={"ID":"71e767c2-123a-497e-83b8-9fe0b8f45c77","Type":"ContainerStarted","Data":"b4cb50e9b49ad2276bedf78883f1c90f67839d845413f1e6b967cea5b35c3212"} Jan 28 10:02:52 crc kubenswrapper[4735]: I0128 10:02:52.909809 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-p8gnh" podStartSLOduration=2.031853896 podStartE2EDuration="8.909793019s" podCreationTimestamp="2026-01-28 10:02:44 +0000 UTC" firstStartedPulling="2026-01-28 10:02:45.15608187 +0000 UTC m=+1218.305746243" lastFinishedPulling="2026-01-28 10:02:52.034020983 +0000 UTC 
m=+1225.183685366" observedRunningTime="2026-01-28 10:02:52.904885185 +0000 UTC m=+1226.054549578" watchObservedRunningTime="2026-01-28 10:02:52.909793019 +0000 UTC m=+1226.059457392" Jan 28 10:02:53 crc kubenswrapper[4735]: I0128 10:02:53.100433 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 10:02:53 crc kubenswrapper[4735]: I0128 10:02:53.100485 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 10:02:53 crc kubenswrapper[4735]: I0128 10:02:53.134906 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 10:02:53 crc kubenswrapper[4735]: I0128 10:02:53.162231 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 10:02:53 crc kubenswrapper[4735]: I0128 10:02:53.896795 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 10:02:53 crc kubenswrapper[4735]: I0128 10:02:53.897168 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 10:02:54 crc kubenswrapper[4735]: I0128 10:02:54.908357 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerStarted","Data":"f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80"} Jan 28 10:02:54 crc kubenswrapper[4735]: I0128 10:02:54.908736 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 10:02:55 crc kubenswrapper[4735]: I0128 10:02:55.833996 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 10:02:55 crc kubenswrapper[4735]: I0128 10:02:55.835889 4735 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 10:02:55 crc kubenswrapper[4735]: I0128 10:02:55.870537 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.5060856559999998 podStartE2EDuration="10.870521865s" podCreationTimestamp="2026-01-28 10:02:45 +0000 UTC" firstStartedPulling="2026-01-28 10:02:46.68433379 +0000 UTC m=+1219.833998203" lastFinishedPulling="2026-01-28 10:02:54.048770039 +0000 UTC m=+1227.198434412" observedRunningTime="2026-01-28 10:02:54.929720715 +0000 UTC m=+1228.079385098" watchObservedRunningTime="2026-01-28 10:02:55.870521865 +0000 UTC m=+1229.020186238" Jan 28 10:03:05 crc kubenswrapper[4735]: I0128 10:03:05.051453 4735 generic.go:334] "Generic (PLEG): container finished" podID="71e767c2-123a-497e-83b8-9fe0b8f45c77" containerID="b4cb50e9b49ad2276bedf78883f1c90f67839d845413f1e6b967cea5b35c3212" exitCode=0 Jan 28 10:03:05 crc kubenswrapper[4735]: I0128 10:03:05.051472 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-p8gnh" event={"ID":"71e767c2-123a-497e-83b8-9fe0b8f45c77","Type":"ContainerDied","Data":"b4cb50e9b49ad2276bedf78883f1c90f67839d845413f1e6b967cea5b35c3212"} Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.485115 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.635256 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-scripts\") pod \"71e767c2-123a-497e-83b8-9fe0b8f45c77\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.635384 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c49zl\" (UniqueName: \"kubernetes.io/projected/71e767c2-123a-497e-83b8-9fe0b8f45c77-kube-api-access-c49zl\") pod \"71e767c2-123a-497e-83b8-9fe0b8f45c77\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.635457 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-config-data\") pod \"71e767c2-123a-497e-83b8-9fe0b8f45c77\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.635626 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-combined-ca-bundle\") pod \"71e767c2-123a-497e-83b8-9fe0b8f45c77\" (UID: \"71e767c2-123a-497e-83b8-9fe0b8f45c77\") " Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.645633 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e767c2-123a-497e-83b8-9fe0b8f45c77-kube-api-access-c49zl" (OuterVolumeSpecName: "kube-api-access-c49zl") pod "71e767c2-123a-497e-83b8-9fe0b8f45c77" (UID: "71e767c2-123a-497e-83b8-9fe0b8f45c77"). InnerVolumeSpecName "kube-api-access-c49zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.645968 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-scripts" (OuterVolumeSpecName: "scripts") pod "71e767c2-123a-497e-83b8-9fe0b8f45c77" (UID: "71e767c2-123a-497e-83b8-9fe0b8f45c77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.679533 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-config-data" (OuterVolumeSpecName: "config-data") pod "71e767c2-123a-497e-83b8-9fe0b8f45c77" (UID: "71e767c2-123a-497e-83b8-9fe0b8f45c77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.679601 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71e767c2-123a-497e-83b8-9fe0b8f45c77" (UID: "71e767c2-123a-497e-83b8-9fe0b8f45c77"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.737788 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c49zl\" (UniqueName: \"kubernetes.io/projected/71e767c2-123a-497e-83b8-9fe0b8f45c77-kube-api-access-c49zl\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.737838 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.737858 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:06 crc kubenswrapper[4735]: I0128 10:03:06.737876 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71e767c2-123a-497e-83b8-9fe0b8f45c77-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.077940 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-p8gnh" event={"ID":"71e767c2-123a-497e-83b8-9fe0b8f45c77","Type":"ContainerDied","Data":"3d8eedf91e4dd1fe7cd2b33eea52a94a77d796b1c5909f52c19a3dd356b51082"} Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.077991 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d8eedf91e4dd1fe7cd2b33eea52a94a77d796b1c5909f52c19a3dd356b51082" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.078089 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-p8gnh" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.226131 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 10:03:07 crc kubenswrapper[4735]: E0128 10:03:07.226627 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e767c2-123a-497e-83b8-9fe0b8f45c77" containerName="nova-cell0-conductor-db-sync" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.226657 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e767c2-123a-497e-83b8-9fe0b8f45c77" containerName="nova-cell0-conductor-db-sync" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.226989 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e767c2-123a-497e-83b8-9fe0b8f45c77" containerName="nova-cell0-conductor-db-sync" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.233547 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.254893 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnjw\" (UniqueName: \"kubernetes.io/projected/dd58e731-439b-4ecf-b46a-2e978174ddf9-kube-api-access-xnnjw\") pod \"nova-cell0-conductor-0\" (UID: \"dd58e731-439b-4ecf-b46a-2e978174ddf9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.255006 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd58e731-439b-4ecf-b46a-2e978174ddf9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dd58e731-439b-4ecf-b46a-2e978174ddf9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.255072 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd58e731-439b-4ecf-b46a-2e978174ddf9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dd58e731-439b-4ecf-b46a-2e978174ddf9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.277644 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w28c6" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.280501 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.297084 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.357031 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd58e731-439b-4ecf-b46a-2e978174ddf9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dd58e731-439b-4ecf-b46a-2e978174ddf9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.357145 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnjw\" (UniqueName: \"kubernetes.io/projected/dd58e731-439b-4ecf-b46a-2e978174ddf9-kube-api-access-xnnjw\") pod \"nova-cell0-conductor-0\" (UID: \"dd58e731-439b-4ecf-b46a-2e978174ddf9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.357203 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd58e731-439b-4ecf-b46a-2e978174ddf9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dd58e731-439b-4ecf-b46a-2e978174ddf9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.365395 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd58e731-439b-4ecf-b46a-2e978174ddf9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dd58e731-439b-4ecf-b46a-2e978174ddf9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.366257 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd58e731-439b-4ecf-b46a-2e978174ddf9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dd58e731-439b-4ecf-b46a-2e978174ddf9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.374579 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnjw\" (UniqueName: \"kubernetes.io/projected/dd58e731-439b-4ecf-b46a-2e978174ddf9-kube-api-access-xnnjw\") pod \"nova-cell0-conductor-0\" (UID: \"dd58e731-439b-4ecf-b46a-2e978174ddf9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:07 crc kubenswrapper[4735]: I0128 10:03:07.599194 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:08 crc kubenswrapper[4735]: I0128 10:03:08.074169 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 10:03:08 crc kubenswrapper[4735]: W0128 10:03:08.081154 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd58e731_439b_4ecf_b46a_2e978174ddf9.slice/crio-78a3cd6b3519ebffb729dd4b123876c175306bcd11ce4925917e04af283d74e1 WatchSource:0}: Error finding container 78a3cd6b3519ebffb729dd4b123876c175306bcd11ce4925917e04af283d74e1: Status 404 returned error can't find the container with id 78a3cd6b3519ebffb729dd4b123876c175306bcd11ce4925917e04af283d74e1 Jan 28 10:03:09 crc kubenswrapper[4735]: I0128 10:03:09.101732 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dd58e731-439b-4ecf-b46a-2e978174ddf9","Type":"ContainerStarted","Data":"8f4227cb054af887c2c937b13bcf8b355ed7008960211b18756488c4aee8a3f2"} Jan 28 10:03:09 crc kubenswrapper[4735]: I0128 10:03:09.102457 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:09 crc kubenswrapper[4735]: I0128 10:03:09.102469 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dd58e731-439b-4ecf-b46a-2e978174ddf9","Type":"ContainerStarted","Data":"78a3cd6b3519ebffb729dd4b123876c175306bcd11ce4925917e04af283d74e1"} Jan 28 10:03:09 crc kubenswrapper[4735]: I0128 10:03:09.119604 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.119576578 podStartE2EDuration="2.119576578s" podCreationTimestamp="2026-01-28 10:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
10:03:09.117010725 +0000 UTC m=+1242.266675118" watchObservedRunningTime="2026-01-28 10:03:09.119576578 +0000 UTC m=+1242.269240971" Jan 28 10:03:16 crc kubenswrapper[4735]: I0128 10:03:16.197496 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 10:03:17 crc kubenswrapper[4735]: I0128 10:03:17.648262 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.114570 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-fjbrl"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.115980 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.118041 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.118087 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.130077 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fjbrl"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.270768 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-scripts\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.270828 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-config-data\") pod 
\"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.270878 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.270947 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8zs\" (UniqueName: \"kubernetes.io/projected/553981e5-1a34-4046-b06f-cb8438042128-kube-api-access-6c8zs\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.272463 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.273880 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.277061 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.287555 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.372085 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c8zs\" (UniqueName: \"kubernetes.io/projected/553981e5-1a34-4046-b06f-cb8438042128-kube-api-access-6c8zs\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.372162 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-logs\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.372187 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-config-data\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.372203 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.372241 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-scripts\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.372273 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-config-data\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.372480 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xddf7\" (UniqueName: \"kubernetes.io/projected/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-kube-api-access-xddf7\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.372516 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.383378 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-scripts\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.383603 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.383708 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-config-data\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.408842 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.410393 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.412589 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.422723 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8zs\" (UniqueName: \"kubernetes.io/projected/553981e5-1a34-4046-b06f-cb8438042128-kube-api-access-6c8zs\") pod \"nova-cell0-cell-mapping-fjbrl\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.435134 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.437359 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.477607 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xddf7\" (UniqueName: \"kubernetes.io/projected/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-kube-api-access-xddf7\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.477824 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-logs\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.477859 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-config-data\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.477883 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.479974 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-logs\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 
10:03:18.519132 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.536762 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-config-data\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.536852 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.539955 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.562630 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xddf7\" (UniqueName: \"kubernetes.io/projected/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-kube-api-access-xddf7\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.580222 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25319b8b-b6a5-46fc-927f-e98c107bfc0e-logs\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.580283 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-config-data\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.580324 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.580356 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chgxs\" (UniqueName: \"kubernetes.io/projected/25319b8b-b6a5-46fc-927f-e98c107bfc0e-kube-api-access-chgxs\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.600490 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.613976 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.685608 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25319b8b-b6a5-46fc-927f-e98c107bfc0e-logs\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.685718 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-config-data\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.685827 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.685900 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chgxs\" (UniqueName: \"kubernetes.io/projected/25319b8b-b6a5-46fc-927f-e98c107bfc0e-kube-api-access-chgxs\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.685942 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8s7\" (UniqueName: \"kubernetes.io/projected/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-kube-api-access-td8s7\") pod \"nova-scheduler-0\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.686228 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-config-data\") pod \"nova-scheduler-0\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.686277 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.686897 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/25319b8b-b6a5-46fc-927f-e98c107bfc0e-logs\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.693725 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lqz84"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.699508 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.706541 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.706619 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-config-data\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.709023 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lqz84"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.714727 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chgxs\" (UniqueName: \"kubernetes.io/projected/25319b8b-b6a5-46fc-927f-e98c107bfc0e-kube-api-access-chgxs\") pod \"nova-metadata-0\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " pod="openstack/nova-metadata-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.732253 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.733435 4735 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.735974 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.801606 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.801945 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs7rw\" (UniqueName: \"kubernetes.io/projected/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-kube-api-access-bs7rw\") pod \"nova-cell1-novncproxy-0\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.801983 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.802032 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.802059 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.802119 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8s7\" (UniqueName: \"kubernetes.io/projected/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-kube-api-access-td8s7\") pod \"nova-scheduler-0\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.802158 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.802198 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-config\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.802237 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl7f4\" (UniqueName: \"kubernetes.io/projected/1a945053-f5d2-4d71-9e21-fdc3d7926265-kube-api-access-jl7f4\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.802561 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-config-data\") pod \"nova-scheduler-0\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.802597 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.802633 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.812712 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-config-data\") pod \"nova-scheduler-0\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.814965 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.818430 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.824380 
4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8s7\" (UniqueName: \"kubernetes.io/projected/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-kube-api-access-td8s7\") pod \"nova-scheduler-0\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.894533 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.905955 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.906002 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs7rw\" (UniqueName: \"kubernetes.io/projected/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-kube-api-access-bs7rw\") pod \"nova-cell1-novncproxy-0\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.906026 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.906049 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.906068 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.906100 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.906125 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-config\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.906149 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl7f4\" (UniqueName: \"kubernetes.io/projected/1a945053-f5d2-4d71-9e21-fdc3d7926265-kube-api-access-jl7f4\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.906205 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc 
kubenswrapper[4735]: I0128 10:03:18.907076 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.907464 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.907607 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.908095 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.910966 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.911002 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.911641 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-config\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.926236 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs7rw\" (UniqueName: \"kubernetes.io/projected/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-kube-api-access-bs7rw\") pod \"nova-cell1-novncproxy-0\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.934448 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl7f4\" (UniqueName: \"kubernetes.io/projected/1a945053-f5d2-4d71-9e21-fdc3d7926265-kube-api-access-jl7f4\") pod \"dnsmasq-dns-845d6d6f59-lqz84\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:18 crc kubenswrapper[4735]: I0128 10:03:18.996396 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.020561 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.039766 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.108242 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.185159 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fjbrl"] Jan 28 10:03:19 crc kubenswrapper[4735]: W0128 10:03:19.218284 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod553981e5_1a34_4046_b06f_cb8438042128.slice/crio-b56f56a1534e063eccdaef099a3fccc670458600db65e9dcc66e5469e0cb391a WatchSource:0}: Error finding container b56f56a1534e063eccdaef099a3fccc670458600db65e9dcc66e5469e0cb391a: Status 404 returned error can't find the container with id b56f56a1534e063eccdaef099a3fccc670458600db65e9dcc66e5469e0cb391a Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.265811 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vws4l"] Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.267213 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.270395 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.270552 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.282191 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vws4l"] Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.360043 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.419273 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.419642 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-scripts\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.419666 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7df\" (UniqueName: \"kubernetes.io/projected/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-kube-api-access-pt7df\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " 
pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.419719 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-config-data\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.521180 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.521245 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-scripts\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.521263 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt7df\" (UniqueName: \"kubernetes.io/projected/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-kube-api-access-pt7df\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.521306 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-config-data\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: 
\"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.527930 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.527948 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-scripts\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.532204 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-config-data\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.550328 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt7df\" (UniqueName: \"kubernetes.io/projected/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-kube-api-access-pt7df\") pod \"nova-cell1-conductor-db-sync-vws4l\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.553756 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.598772 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.660634 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:03:19 crc kubenswrapper[4735]: W0128 10:03:19.665530 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10bcdbff_4b2f_4447_81ac_30d03aaef0f9.slice/crio-f47e393c1a91722a460f499ab1ba2e3ba13de72fa58618c3588641f817d2008d WatchSource:0}: Error finding container f47e393c1a91722a460f499ab1ba2e3ba13de72fa58618c3588641f817d2008d: Status 404 returned error can't find the container with id f47e393c1a91722a460f499ab1ba2e3ba13de72fa58618c3588641f817d2008d Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.788927 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lqz84"] Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.934661 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 10:03:19 crc kubenswrapper[4735]: W0128 10:03:19.936738 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8b6f417_ef4f_4a3f_8603_7eb8b57714ff.slice/crio-a50b4b6e361b8b172ed5fe4715cf508dbbf5f55461dc0e143e9ab7d12a6959ff WatchSource:0}: Error finding container a50b4b6e361b8b172ed5fe4715cf508dbbf5f55461dc0e143e9ab7d12a6959ff: Status 404 returned error can't find the container with id a50b4b6e361b8b172ed5fe4715cf508dbbf5f55461dc0e143e9ab7d12a6959ff Jan 28 10:03:19 crc kubenswrapper[4735]: I0128 10:03:19.977923 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vws4l"] Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.223275 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8","Type":"ContainerStarted","Data":"dff692ea80f3fcb57a15cdcab336f99acc6610eb293923e756009989bde69e68"} Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.225221 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vws4l" event={"ID":"d7066dce-cc3b-4aff-9747-d10ef6eeb37d","Type":"ContainerStarted","Data":"82bd16090dd3bf04431102c84187e9644ed15ce87d7c7c93e9041f1f02e7828a"} Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.225263 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vws4l" event={"ID":"d7066dce-cc3b-4aff-9747-d10ef6eeb37d","Type":"ContainerStarted","Data":"adb2835d8f575ea024cdd31bb2503ba7aa5140bfa372e71758f58b4b5f2e5c93"} Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.230129 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff","Type":"ContainerStarted","Data":"a50b4b6e361b8b172ed5fe4715cf508dbbf5f55461dc0e143e9ab7d12a6959ff"} Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.239618 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25319b8b-b6a5-46fc-927f-e98c107bfc0e","Type":"ContainerStarted","Data":"9ccbd8331473f662c167b85cd6a295ab4bb1f7d93f8eabc39dc0973cdf5800e7"} Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.243289 4735 generic.go:334] "Generic (PLEG): container finished" podID="1a945053-f5d2-4d71-9e21-fdc3d7926265" containerID="edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a" exitCode=0 Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.243372 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" event={"ID":"1a945053-f5d2-4d71-9e21-fdc3d7926265","Type":"ContainerDied","Data":"edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a"} Jan 28 10:03:20 crc 
kubenswrapper[4735]: I0128 10:03:20.243399 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" event={"ID":"1a945053-f5d2-4d71-9e21-fdc3d7926265","Type":"ContainerStarted","Data":"0dc7f685b1923c6448fac8dcc91324dcde6bceedd4e83c83a4c1ca289c382342"} Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.251108 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fjbrl" event={"ID":"553981e5-1a34-4046-b06f-cb8438042128","Type":"ContainerStarted","Data":"0ddf1a6cbd8429be5326f7cee4fea1624e6f7cf6e53ea11ce4c15b1c9bda090e"} Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.251151 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fjbrl" event={"ID":"553981e5-1a34-4046-b06f-cb8438042128","Type":"ContainerStarted","Data":"b56f56a1534e063eccdaef099a3fccc670458600db65e9dcc66e5469e0cb391a"} Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.251575 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-vws4l" podStartSLOduration=1.251560644 podStartE2EDuration="1.251560644s" podCreationTimestamp="2026-01-28 10:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:03:20.249547043 +0000 UTC m=+1253.399211416" watchObservedRunningTime="2026-01-28 10:03:20.251560644 +0000 UTC m=+1253.401225017" Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.255623 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10bcdbff-4b2f-4447-81ac-30d03aaef0f9","Type":"ContainerStarted","Data":"f47e393c1a91722a460f499ab1ba2e3ba13de72fa58618c3588641f817d2008d"} Jan 28 10:03:20 crc kubenswrapper[4735]: I0128 10:03:20.343768 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-fjbrl" 
podStartSLOduration=2.343739502 podStartE2EDuration="2.343739502s" podCreationTimestamp="2026-01-28 10:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:03:20.296724045 +0000 UTC m=+1253.446388418" watchObservedRunningTime="2026-01-28 10:03:20.343739502 +0000 UTC m=+1253.493403885" Jan 28 10:03:21 crc kubenswrapper[4735]: I0128 10:03:21.268115 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" event={"ID":"1a945053-f5d2-4d71-9e21-fdc3d7926265","Type":"ContainerStarted","Data":"8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5"} Jan 28 10:03:21 crc kubenswrapper[4735]: I0128 10:03:21.297507 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" podStartSLOduration=3.297488637 podStartE2EDuration="3.297488637s" podCreationTimestamp="2026-01-28 10:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:03:21.288912741 +0000 UTC m=+1254.438577144" watchObservedRunningTime="2026-01-28 10:03:21.297488637 +0000 UTC m=+1254.447153000" Jan 28 10:03:21 crc kubenswrapper[4735]: I0128 10:03:21.707545 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 10:03:21 crc kubenswrapper[4735]: I0128 10:03:21.722250 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:03:22 crc kubenswrapper[4735]: I0128 10:03:22.281050 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8","Type":"ContainerStarted","Data":"23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61"} Jan 28 10:03:22 crc kubenswrapper[4735]: I0128 10:03:22.283146 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"25319b8b-b6a5-46fc-927f-e98c107bfc0e","Type":"ContainerStarted","Data":"b500d99cc28a4c58293b28d6abcfdf7fb0030ec6f9af23baf06629b1af514002"} Jan 28 10:03:22 crc kubenswrapper[4735]: I0128 10:03:22.283400 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.292870 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10bcdbff-4b2f-4447-81ac-30d03aaef0f9","Type":"ContainerStarted","Data":"becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854"} Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.296528 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8","Type":"ContainerStarted","Data":"71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b"} Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.300382 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff","Type":"ContainerStarted","Data":"52deaa193137e26997792b53653a7037ec145f886eec81a6d4e38d3cc5e704bd"} Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.300456 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d8b6f417-ef4f-4a3f-8603-7eb8b57714ff" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://52deaa193137e26997792b53653a7037ec145f886eec81a6d4e38d3cc5e704bd" gracePeriod=30 Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.306797 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25319b8b-b6a5-46fc-927f-e98c107bfc0e","Type":"ContainerStarted","Data":"dec6ffdd43f7c0945a00ff0be119826b2bed3b84e45d7c63c867bc5bc194e70b"} Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 
10:03:23.306929 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerName="nova-metadata-log" containerID="cri-o://b500d99cc28a4c58293b28d6abcfdf7fb0030ec6f9af23baf06629b1af514002" gracePeriod=30 Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.306994 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerName="nova-metadata-metadata" containerID="cri-o://dec6ffdd43f7c0945a00ff0be119826b2bed3b84e45d7c63c867bc5bc194e70b" gracePeriod=30 Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.319863 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.096458572 podStartE2EDuration="5.319839662s" podCreationTimestamp="2026-01-28 10:03:18 +0000 UTC" firstStartedPulling="2026-01-28 10:03:19.668088513 +0000 UTC m=+1252.817752886" lastFinishedPulling="2026-01-28 10:03:22.891469603 +0000 UTC m=+1256.041133976" observedRunningTime="2026-01-28 10:03:23.314550033 +0000 UTC m=+1256.464214406" watchObservedRunningTime="2026-01-28 10:03:23.319839662 +0000 UTC m=+1256.469504035" Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.369926 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.484485856 podStartE2EDuration="5.369901202s" podCreationTimestamp="2026-01-28 10:03:18 +0000 UTC" firstStartedPulling="2026-01-28 10:03:19.386437604 +0000 UTC m=+1252.536101977" lastFinishedPulling="2026-01-28 10:03:21.27185295 +0000 UTC m=+1254.421517323" observedRunningTime="2026-01-28 10:03:23.339954586 +0000 UTC m=+1256.489618959" watchObservedRunningTime="2026-01-28 10:03:23.369901202 +0000 UTC m=+1256.519565595" Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.386239 4735 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.677264495 podStartE2EDuration="5.386217548s" podCreationTimestamp="2026-01-28 10:03:18 +0000 UTC" firstStartedPulling="2026-01-28 10:03:19.565097902 +0000 UTC m=+1252.714762275" lastFinishedPulling="2026-01-28 10:03:21.274050955 +0000 UTC m=+1254.423715328" observedRunningTime="2026-01-28 10:03:23.357677811 +0000 UTC m=+1256.507342184" watchObservedRunningTime="2026-01-28 10:03:23.386217548 +0000 UTC m=+1256.535881931" Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.405929 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.439049845 podStartE2EDuration="5.405912784s" podCreationTimestamp="2026-01-28 10:03:18 +0000 UTC" firstStartedPulling="2026-01-28 10:03:19.941726206 +0000 UTC m=+1253.091390579" lastFinishedPulling="2026-01-28 10:03:22.908589145 +0000 UTC m=+1256.058253518" observedRunningTime="2026-01-28 10:03:23.374079728 +0000 UTC m=+1256.523744101" watchObservedRunningTime="2026-01-28 10:03:23.405912784 +0000 UTC m=+1256.555577157" Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.998159 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 10:03:23 crc kubenswrapper[4735]: I0128 10:03:23.998505 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 10:03:24.021196 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 10:03:24.109236 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 10:03:24.257939 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 
10:03:24.258143 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4af8f981-a165-4485-a3da-f6c0dfaecb00" containerName="kube-state-metrics" containerID="cri-o://a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136" gracePeriod=30 Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 10:03:24.318619 4735 generic.go:334] "Generic (PLEG): container finished" podID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerID="b500d99cc28a4c58293b28d6abcfdf7fb0030ec6f9af23baf06629b1af514002" exitCode=143 Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 10:03:24.318912 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25319b8b-b6a5-46fc-927f-e98c107bfc0e","Type":"ContainerDied","Data":"b500d99cc28a4c58293b28d6abcfdf7fb0030ec6f9af23baf06629b1af514002"} Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 10:03:24.746976 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 10:03:24.828508 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql8mv\" (UniqueName: \"kubernetes.io/projected/4af8f981-a165-4485-a3da-f6c0dfaecb00-kube-api-access-ql8mv\") pod \"4af8f981-a165-4485-a3da-f6c0dfaecb00\" (UID: \"4af8f981-a165-4485-a3da-f6c0dfaecb00\") " Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 10:03:24.849890 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af8f981-a165-4485-a3da-f6c0dfaecb00-kube-api-access-ql8mv" (OuterVolumeSpecName: "kube-api-access-ql8mv") pod "4af8f981-a165-4485-a3da-f6c0dfaecb00" (UID: "4af8f981-a165-4485-a3da-f6c0dfaecb00"). InnerVolumeSpecName "kube-api-access-ql8mv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:24 crc kubenswrapper[4735]: I0128 10:03:24.931417 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql8mv\" (UniqueName: \"kubernetes.io/projected/4af8f981-a165-4485-a3da-f6c0dfaecb00-kube-api-access-ql8mv\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.329591 4735 generic.go:334] "Generic (PLEG): container finished" podID="4af8f981-a165-4485-a3da-f6c0dfaecb00" containerID="a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136" exitCode=2 Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.329647 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.329640 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4af8f981-a165-4485-a3da-f6c0dfaecb00","Type":"ContainerDied","Data":"a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136"} Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.330111 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4af8f981-a165-4485-a3da-f6c0dfaecb00","Type":"ContainerDied","Data":"3789d7fa64691a8a00c301f0ed8dd1d118526c8ad980bd58b845c9c8222c8746"} Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.330138 4735 scope.go:117] "RemoveContainer" containerID="a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.353523 4735 scope.go:117] "RemoveContainer" containerID="a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136" Jan 28 10:03:25 crc kubenswrapper[4735]: E0128 10:03:25.354181 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136\": container with ID starting with a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136 not found: ID does not exist" containerID="a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.354212 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136"} err="failed to get container status \"a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136\": rpc error: code = NotFound desc = could not find container \"a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136\": container with ID starting with a61f261abd38122f0dd117a8c8e53b0af901c4f79f571aad54afc35df1072136 not found: ID does not exist" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.375422 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.391697 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.409491 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 10:03:25 crc kubenswrapper[4735]: E0128 10:03:25.410013 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af8f981-a165-4485-a3da-f6c0dfaecb00" containerName="kube-state-metrics" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.410036 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af8f981-a165-4485-a3da-f6c0dfaecb00" containerName="kube-state-metrics" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.410262 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="4af8f981-a165-4485-a3da-f6c0dfaecb00" containerName="kube-state-metrics" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 
10:03:25.410946 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.415565 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.416016 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.422630 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.514455 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af8f981-a165-4485-a3da-f6c0dfaecb00" path="/var/lib/kubelet/pods/4af8f981-a165-4485-a3da-f6c0dfaecb00/volumes" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.541636 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b16556d-dd01-471a-a057-84ba1afcc8de-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.541805 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdcw\" (UniqueName: \"kubernetes.io/projected/7b16556d-dd01-471a-a057-84ba1afcc8de-kube-api-access-8xdcw\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.541851 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b16556d-dd01-471a-a057-84ba1afcc8de-kube-state-metrics-tls-certs\") pod 
\"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.541953 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7b16556d-dd01-471a-a057-84ba1afcc8de-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.643815 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xdcw\" (UniqueName: \"kubernetes.io/projected/7b16556d-dd01-471a-a057-84ba1afcc8de-kube-api-access-8xdcw\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.643866 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b16556d-dd01-471a-a057-84ba1afcc8de-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.643935 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7b16556d-dd01-471a-a057-84ba1afcc8de-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.644016 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b16556d-dd01-471a-a057-84ba1afcc8de-combined-ca-bundle\") pod 
\"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.659189 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b16556d-dd01-471a-a057-84ba1afcc8de-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.665812 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b16556d-dd01-471a-a057-84ba1afcc8de-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.667274 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xdcw\" (UniqueName: \"kubernetes.io/projected/7b16556d-dd01-471a-a057-84ba1afcc8de-kube-api-access-8xdcw\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.674456 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7b16556d-dd01-471a-a057-84ba1afcc8de-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7b16556d-dd01-471a-a057-84ba1afcc8de\") " pod="openstack/kube-state-metrics-0" Jan 28 10:03:25 crc kubenswrapper[4735]: I0128 10:03:25.734554 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 10:03:26 crc kubenswrapper[4735]: I0128 10:03:26.222111 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 10:03:26 crc kubenswrapper[4735]: I0128 10:03:26.302196 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:03:26 crc kubenswrapper[4735]: I0128 10:03:26.302991 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="sg-core" containerID="cri-o://54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0" gracePeriod=30 Jan 28 10:03:26 crc kubenswrapper[4735]: I0128 10:03:26.303089 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="ceilometer-notification-agent" containerID="cri-o://1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f" gracePeriod=30 Jan 28 10:03:26 crc kubenswrapper[4735]: I0128 10:03:26.303122 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="proxy-httpd" containerID="cri-o://f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80" gracePeriod=30 Jan 28 10:03:26 crc kubenswrapper[4735]: I0128 10:03:26.303188 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="ceilometer-central-agent" containerID="cri-o://7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8" gracePeriod=30 Jan 28 10:03:26 crc kubenswrapper[4735]: I0128 10:03:26.341560 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"7b16556d-dd01-471a-a057-84ba1afcc8de","Type":"ContainerStarted","Data":"20fcfea0c309167e9d99a10ae912fd131f758a27e452036540981d8a42041440"} Jan 28 10:03:26 crc kubenswrapper[4735]: E0128 10:03:26.908227 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ceddc_88b4_4f42_80a4_0180dc880b6d.slice/crio-conmon-7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8.scope\": RecentStats: unable to find data in memory cache]" Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.353962 4735 generic.go:334] "Generic (PLEG): container finished" podID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerID="f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80" exitCode=0 Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.354174 4735 generic.go:334] "Generic (PLEG): container finished" podID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerID="54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0" exitCode=2 Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.354184 4735 generic.go:334] "Generic (PLEG): container finished" podID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerID="7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8" exitCode=0 Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.354136 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerDied","Data":"f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80"} Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.354231 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerDied","Data":"54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0"} Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.354243 
4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerDied","Data":"7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8"} Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.355649 4735 generic.go:334] "Generic (PLEG): container finished" podID="553981e5-1a34-4046-b06f-cb8438042128" containerID="0ddf1a6cbd8429be5326f7cee4fea1624e6f7cf6e53ea11ce4c15b1c9bda090e" exitCode=0 Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.355688 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fjbrl" event={"ID":"553981e5-1a34-4046-b06f-cb8438042128","Type":"ContainerDied","Data":"0ddf1a6cbd8429be5326f7cee4fea1624e6f7cf6e53ea11ce4c15b1c9bda090e"} Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.357865 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7b16556d-dd01-471a-a057-84ba1afcc8de","Type":"ContainerStarted","Data":"517177201f31ef72c167a232e2db375631e2391e6e0dcb15b31d8119805d1dd6"} Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.358579 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 10:03:27 crc kubenswrapper[4735]: I0128 10:03:27.415485 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.053737572 podStartE2EDuration="2.41546373s" podCreationTimestamp="2026-01-28 10:03:25 +0000 UTC" firstStartedPulling="2026-01-28 10:03:26.227210846 +0000 UTC m=+1259.376875219" lastFinishedPulling="2026-01-28 10:03:26.588937004 +0000 UTC m=+1259.738601377" observedRunningTime="2026-01-28 10:03:27.396969999 +0000 UTC m=+1260.546634372" watchObservedRunningTime="2026-01-28 10:03:27.41546373 +0000 UTC m=+1260.565128113" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.369391 4735 generic.go:334] "Generic (PLEG): 
container finished" podID="d7066dce-cc3b-4aff-9747-d10ef6eeb37d" containerID="82bd16090dd3bf04431102c84187e9644ed15ce87d7c7c93e9041f1f02e7828a" exitCode=0 Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.369590 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vws4l" event={"ID":"d7066dce-cc3b-4aff-9747-d10ef6eeb37d","Type":"ContainerDied","Data":"82bd16090dd3bf04431102c84187e9644ed15ce87d7c7c93e9041f1f02e7828a"} Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.764384 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.813018 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-combined-ca-bundle\") pod \"553981e5-1a34-4046-b06f-cb8438042128\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.813067 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-scripts\") pod \"553981e5-1a34-4046-b06f-cb8438042128\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.813136 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c8zs\" (UniqueName: \"kubernetes.io/projected/553981e5-1a34-4046-b06f-cb8438042128-kube-api-access-6c8zs\") pod \"553981e5-1a34-4046-b06f-cb8438042128\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.813247 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-config-data\") pod 
\"553981e5-1a34-4046-b06f-cb8438042128\" (UID: \"553981e5-1a34-4046-b06f-cb8438042128\") " Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.820108 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553981e5-1a34-4046-b06f-cb8438042128-kube-api-access-6c8zs" (OuterVolumeSpecName: "kube-api-access-6c8zs") pod "553981e5-1a34-4046-b06f-cb8438042128" (UID: "553981e5-1a34-4046-b06f-cb8438042128"). InnerVolumeSpecName "kube-api-access-6c8zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.820643 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-scripts" (OuterVolumeSpecName: "scripts") pod "553981e5-1a34-4046-b06f-cb8438042128" (UID: "553981e5-1a34-4046-b06f-cb8438042128"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.849300 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "553981e5-1a34-4046-b06f-cb8438042128" (UID: "553981e5-1a34-4046-b06f-cb8438042128"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.849469 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-config-data" (OuterVolumeSpecName: "config-data") pod "553981e5-1a34-4046-b06f-cb8438042128" (UID: "553981e5-1a34-4046-b06f-cb8438042128"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.895826 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.896069 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.916464 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.916505 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.916521 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c8zs\" (UniqueName: \"kubernetes.io/projected/553981e5-1a34-4046-b06f-cb8438042128-kube-api-access-6c8zs\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:28 crc kubenswrapper[4735]: I0128 10:03:28.916537 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553981e5-1a34-4046-b06f-cb8438042128-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.028295 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.040932 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.078986 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 10:03:29 crc 
kubenswrapper[4735]: I0128 10:03:29.128574 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-95lqd"] Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.128910 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" podUID="312624db-583e-4c95-b802-248457108339" containerName="dnsmasq-dns" containerID="cri-o://8ffe5e6da487c465e9f8a6c14646a04b374b6d9a576fc79b03f9315dbaf622ce" gracePeriod=10 Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.388962 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fjbrl" event={"ID":"553981e5-1a34-4046-b06f-cb8438042128","Type":"ContainerDied","Data":"b56f56a1534e063eccdaef099a3fccc670458600db65e9dcc66e5469e0cb391a"} Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.388992 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fjbrl" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.389009 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b56f56a1534e063eccdaef099a3fccc670458600db65e9dcc66e5469e0cb391a" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.394197 4735 generic.go:334] "Generic (PLEG): container finished" podID="312624db-583e-4c95-b802-248457108339" containerID="8ffe5e6da487c465e9f8a6c14646a04b374b6d9a576fc79b03f9315dbaf622ce" exitCode=0 Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.394325 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" event={"ID":"312624db-583e-4c95-b802-248457108339","Type":"ContainerDied","Data":"8ffe5e6da487c465e9f8a6c14646a04b374b6d9a576fc79b03f9315dbaf622ce"} Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.459184 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 10:03:29 crc 
kubenswrapper[4735]: I0128 10:03:29.570085 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.570330 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-log" containerID="cri-o://23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61" gracePeriod=30 Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.570478 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-api" containerID="cri-o://71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b" gracePeriod=30 Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.599890 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": EOF" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.600457 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": EOF" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.637227 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.733192 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd4cm\" (UniqueName: \"kubernetes.io/projected/312624db-583e-4c95-b802-248457108339-kube-api-access-fd4cm\") pod \"312624db-583e-4c95-b802-248457108339\" (UID: \"312624db-583e-4c95-b802-248457108339\") " Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.733534 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-sb\") pod \"312624db-583e-4c95-b802-248457108339\" (UID: \"312624db-583e-4c95-b802-248457108339\") " Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.733591 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-svc\") pod \"312624db-583e-4c95-b802-248457108339\" (UID: \"312624db-583e-4c95-b802-248457108339\") " Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.733622 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-nb\") pod \"312624db-583e-4c95-b802-248457108339\" (UID: \"312624db-583e-4c95-b802-248457108339\") " Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.733719 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-swift-storage-0\") pod \"312624db-583e-4c95-b802-248457108339\" (UID: \"312624db-583e-4c95-b802-248457108339\") " Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.733854 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-config\") pod \"312624db-583e-4c95-b802-248457108339\" (UID: \"312624db-583e-4c95-b802-248457108339\") " Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.747003 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312624db-583e-4c95-b802-248457108339-kube-api-access-fd4cm" (OuterVolumeSpecName: "kube-api-access-fd4cm") pod "312624db-583e-4c95-b802-248457108339" (UID: "312624db-583e-4c95-b802-248457108339"). InnerVolumeSpecName "kube-api-access-fd4cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.818451 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "312624db-583e-4c95-b802-248457108339" (UID: "312624db-583e-4c95-b802-248457108339"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.838176 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd4cm\" (UniqueName: \"kubernetes.io/projected/312624db-583e-4c95-b802-248457108339-kube-api-access-fd4cm\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.838223 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.841148 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "312624db-583e-4c95-b802-248457108339" (UID: "312624db-583e-4c95-b802-248457108339"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.886387 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-config" (OuterVolumeSpecName: "config") pod "312624db-583e-4c95-b802-248457108339" (UID: "312624db-583e-4c95-b802-248457108339"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:03:29 crc kubenswrapper[4735]: E0128 10:03:29.906992 4735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-swift-storage-0 podName:312624db-583e-4c95-b802-248457108339 nodeName:}" failed. No retries permitted until 2026-01-28 10:03:30.406956522 +0000 UTC m=+1263.556620895 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-swift-storage-0" (UniqueName: "kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-swift-storage-0") pod "312624db-583e-4c95-b802-248457108339" (UID: "312624db-583e-4c95-b802-248457108339") : error deleting /var/lib/kubelet/pods/312624db-583e-4c95-b802-248457108339/volume-subpaths: remove /var/lib/kubelet/pods/312624db-583e-4c95-b802-248457108339/volume-subpaths: no such file or directory Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.907271 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "312624db-583e-4c95-b802-248457108339" (UID: "312624db-583e-4c95-b802-248457108339"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.939988 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.940017 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.940027 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.951623 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:29 crc kubenswrapper[4735]: I0128 10:03:29.957551 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.040919 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-config-data\") pod \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.041079 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt7df\" (UniqueName: \"kubernetes.io/projected/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-kube-api-access-pt7df\") pod \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.041107 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-scripts\") pod \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.041186 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-combined-ca-bundle\") pod \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\" (UID: \"d7066dce-cc3b-4aff-9747-d10ef6eeb37d\") " Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.044681 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-kube-api-access-pt7df" (OuterVolumeSpecName: "kube-api-access-pt7df") pod "d7066dce-cc3b-4aff-9747-d10ef6eeb37d" (UID: "d7066dce-cc3b-4aff-9747-d10ef6eeb37d"). InnerVolumeSpecName "kube-api-access-pt7df". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.053183 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-scripts" (OuterVolumeSpecName: "scripts") pod "d7066dce-cc3b-4aff-9747-d10ef6eeb37d" (UID: "d7066dce-cc3b-4aff-9747-d10ef6eeb37d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.067612 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7066dce-cc3b-4aff-9747-d10ef6eeb37d" (UID: "d7066dce-cc3b-4aff-9747-d10ef6eeb37d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.069285 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-config-data" (OuterVolumeSpecName: "config-data") pod "d7066dce-cc3b-4aff-9747-d10ef6eeb37d" (UID: "d7066dce-cc3b-4aff-9747-d10ef6eeb37d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.143592 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.143634 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt7df\" (UniqueName: \"kubernetes.io/projected/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-kube-api-access-pt7df\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.143648 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.143660 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7066dce-cc3b-4aff-9747-d10ef6eeb37d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.420948 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" event={"ID":"312624db-583e-4c95-b802-248457108339","Type":"ContainerDied","Data":"5f9d773895810d8b59da6beb70264e707f8a4b97db0e3e6ca956df432c80e744"} Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.420971 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-95lqd" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.421003 4735 scope.go:117] "RemoveContainer" containerID="8ffe5e6da487c465e9f8a6c14646a04b374b6d9a576fc79b03f9315dbaf622ce" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.429021 4735 generic.go:334] "Generic (PLEG): container finished" podID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerID="23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61" exitCode=143 Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.429138 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8","Type":"ContainerDied","Data":"23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61"} Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.435017 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vws4l" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.435332 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vws4l" event={"ID":"d7066dce-cc3b-4aff-9747-d10ef6eeb37d","Type":"ContainerDied","Data":"adb2835d8f575ea024cdd31bb2503ba7aa5140bfa372e71758f58b4b5f2e5c93"} Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.435484 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb2835d8f575ea024cdd31bb2503ba7aa5140bfa372e71758f58b4b5f2e5c93" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.448189 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-swift-storage-0\") pod \"312624db-583e-4c95-b802-248457108339\" (UID: \"312624db-583e-4c95-b802-248457108339\") " Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.448779 4735 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "312624db-583e-4c95-b802-248457108339" (UID: "312624db-583e-4c95-b802-248457108339"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.449358 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/312624db-583e-4c95-b802-248457108339-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.457151 4735 scope.go:117] "RemoveContainer" containerID="c9d09508a0ab9f918f4f9a216cd9723fb49c0424b2deb1cb6f25856e643b653c" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.490467 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 10:03:30 crc kubenswrapper[4735]: E0128 10:03:30.490928 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312624db-583e-4c95-b802-248457108339" containerName="init" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.490952 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="312624db-583e-4c95-b802-248457108339" containerName="init" Jan 28 10:03:30 crc kubenswrapper[4735]: E0128 10:03:30.490965 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7066dce-cc3b-4aff-9747-d10ef6eeb37d" containerName="nova-cell1-conductor-db-sync" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.490974 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7066dce-cc3b-4aff-9747-d10ef6eeb37d" containerName="nova-cell1-conductor-db-sync" Jan 28 10:03:30 crc kubenswrapper[4735]: E0128 10:03:30.490993 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553981e5-1a34-4046-b06f-cb8438042128" containerName="nova-manage" Jan 28 10:03:30 crc 
kubenswrapper[4735]: I0128 10:03:30.491001 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="553981e5-1a34-4046-b06f-cb8438042128" containerName="nova-manage" Jan 28 10:03:30 crc kubenswrapper[4735]: E0128 10:03:30.491034 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312624db-583e-4c95-b802-248457108339" containerName="dnsmasq-dns" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.491042 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="312624db-583e-4c95-b802-248457108339" containerName="dnsmasq-dns" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.491249 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7066dce-cc3b-4aff-9747-d10ef6eeb37d" containerName="nova-cell1-conductor-db-sync" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.491273 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="312624db-583e-4c95-b802-248457108339" containerName="dnsmasq-dns" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.491298 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="553981e5-1a34-4046-b06f-cb8438042128" containerName="nova-manage" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.492305 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.496977 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.552642 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/166e2c48-1a0f-4aad-bc51-26f82159a795-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"166e2c48-1a0f-4aad-bc51-26f82159a795\") " pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.552800 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gnw2\" (UniqueName: \"kubernetes.io/projected/166e2c48-1a0f-4aad-bc51-26f82159a795-kube-api-access-8gnw2\") pod \"nova-cell1-conductor-0\" (UID: \"166e2c48-1a0f-4aad-bc51-26f82159a795\") " pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.552860 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166e2c48-1a0f-4aad-bc51-26f82159a795-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"166e2c48-1a0f-4aad-bc51-26f82159a795\") " pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.554769 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.654618 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166e2c48-1a0f-4aad-bc51-26f82159a795-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"166e2c48-1a0f-4aad-bc51-26f82159a795\") " pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc 
kubenswrapper[4735]: I0128 10:03:30.654823 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/166e2c48-1a0f-4aad-bc51-26f82159a795-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"166e2c48-1a0f-4aad-bc51-26f82159a795\") " pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.654889 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gnw2\" (UniqueName: \"kubernetes.io/projected/166e2c48-1a0f-4aad-bc51-26f82159a795-kube-api-access-8gnw2\") pod \"nova-cell1-conductor-0\" (UID: \"166e2c48-1a0f-4aad-bc51-26f82159a795\") " pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.665022 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166e2c48-1a0f-4aad-bc51-26f82159a795-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"166e2c48-1a0f-4aad-bc51-26f82159a795\") " pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.676659 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/166e2c48-1a0f-4aad-bc51-26f82159a795-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"166e2c48-1a0f-4aad-bc51-26f82159a795\") " pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.680181 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gnw2\" (UniqueName: \"kubernetes.io/projected/166e2c48-1a0f-4aad-bc51-26f82159a795-kube-api-access-8gnw2\") pod \"nova-cell1-conductor-0\" (UID: \"166e2c48-1a0f-4aad-bc51-26f82159a795\") " pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.826445 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.953338 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-95lqd"] Jan 28 10:03:30 crc kubenswrapper[4735]: I0128 10:03:30.961347 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-95lqd"] Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.256622 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.319620 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.370863 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-scripts\") pod \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.371255 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-config-data\") pod \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.371287 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9tf\" (UniqueName: \"kubernetes.io/projected/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-kube-api-access-zc9tf\") pod \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.371329 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-run-httpd\") pod \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.371367 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-sg-core-conf-yaml\") pod \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.371397 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-log-httpd\") pod \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.371421 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-combined-ca-bundle\") pod \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\" (UID: \"2b5ceddc-88b4-4f42-80a4-0180dc880b6d\") " Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.372070 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b5ceddc-88b4-4f42-80a4-0180dc880b6d" (UID: "2b5ceddc-88b4-4f42-80a4-0180dc880b6d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.372318 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b5ceddc-88b4-4f42-80a4-0180dc880b6d" (UID: "2b5ceddc-88b4-4f42-80a4-0180dc880b6d"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.376867 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-scripts" (OuterVolumeSpecName: "scripts") pod "2b5ceddc-88b4-4f42-80a4-0180dc880b6d" (UID: "2b5ceddc-88b4-4f42-80a4-0180dc880b6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.377264 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-kube-api-access-zc9tf" (OuterVolumeSpecName: "kube-api-access-zc9tf") pod "2b5ceddc-88b4-4f42-80a4-0180dc880b6d" (UID: "2b5ceddc-88b4-4f42-80a4-0180dc880b6d"). InnerVolumeSpecName "kube-api-access-zc9tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.424034 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b5ceddc-88b4-4f42-80a4-0180dc880b6d" (UID: "2b5ceddc-88b4-4f42-80a4-0180dc880b6d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.452007 4735 generic.go:334] "Generic (PLEG): container finished" podID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerID="1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f" exitCode=0 Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.452355 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerDied","Data":"1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f"} Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.452391 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b5ceddc-88b4-4f42-80a4-0180dc880b6d","Type":"ContainerDied","Data":"9cae8f5250723d2c120b5dbe7cb2f95e68cbbaf56122bfd1cfacb66f77933873"} Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.452411 4735 scope.go:117] "RemoveContainer" containerID="f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.452617 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.457256 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"166e2c48-1a0f-4aad-bc51-26f82159a795","Type":"ContainerStarted","Data":"ae65de4c75fc7498bd572b6d1e8dad86147de889918e66a8ee61665e9db363d7"} Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.458970 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="10bcdbff-4b2f-4447-81ac-30d03aaef0f9" containerName="nova-scheduler-scheduler" containerID="cri-o://becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854" gracePeriod=30 Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.476678 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.476703 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9tf\" (UniqueName: \"kubernetes.io/projected/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-kube-api-access-zc9tf\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.476714 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.476726 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.476734 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.481240 4735 scope.go:117] "RemoveContainer" containerID="54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.489899 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b5ceddc-88b4-4f42-80a4-0180dc880b6d" (UID: "2b5ceddc-88b4-4f42-80a4-0180dc880b6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.502913 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-config-data" (OuterVolumeSpecName: "config-data") pod "2b5ceddc-88b4-4f42-80a4-0180dc880b6d" (UID: "2b5ceddc-88b4-4f42-80a4-0180dc880b6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.502934 4735 scope.go:117] "RemoveContainer" containerID="1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.517057 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312624db-583e-4c95-b802-248457108339" path="/var/lib/kubelet/pods/312624db-583e-4c95-b802-248457108339/volumes" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.540841 4735 scope.go:117] "RemoveContainer" containerID="7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.562171 4735 scope.go:117] "RemoveContainer" containerID="f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80" Jan 28 10:03:31 crc kubenswrapper[4735]: E0128 10:03:31.562602 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80\": container with ID starting with f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80 not found: ID does not exist" containerID="f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.562636 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80"} err="failed to get container status \"f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80\": rpc error: code = NotFound desc = could not find container \"f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80\": container with ID starting with f606ad10f863fa54cbe6b78e14638c665804394ef19bccfd7bbfdf6bb6c71a80 not found: ID does not exist" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.562667 4735 scope.go:117] "RemoveContainer" 
containerID="54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0" Jan 28 10:03:31 crc kubenswrapper[4735]: E0128 10:03:31.562998 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0\": container with ID starting with 54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0 not found: ID does not exist" containerID="54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.563027 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0"} err="failed to get container status \"54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0\": rpc error: code = NotFound desc = could not find container \"54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0\": container with ID starting with 54116d5b91543fdfac32c6d19ab4b37667fad020126d5439a6731b52da3574e0 not found: ID does not exist" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.563045 4735 scope.go:117] "RemoveContainer" containerID="1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f" Jan 28 10:03:31 crc kubenswrapper[4735]: E0128 10:03:31.563296 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f\": container with ID starting with 1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f not found: ID does not exist" containerID="1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.563353 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f"} err="failed to get container status \"1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f\": rpc error: code = NotFound desc = could not find container \"1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f\": container with ID starting with 1d591a3b7486f806970cba4a9127ea982ac5b219f7940c52afbc024dc2734b4f not found: ID does not exist" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.563387 4735 scope.go:117] "RemoveContainer" containerID="7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8" Jan 28 10:03:31 crc kubenswrapper[4735]: E0128 10:03:31.563686 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8\": container with ID starting with 7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8 not found: ID does not exist" containerID="7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.563717 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8"} err="failed to get container status \"7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8\": rpc error: code = NotFound desc = could not find container \"7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8\": container with ID starting with 7046cb9959bc1274ba6fd5a571c2e7319732801d084e981bf5795f34de16d9b8 not found: ID does not exist" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.577870 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:31 crc kubenswrapper[4735]: 
I0128 10:03:31.577904 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5ceddc-88b4-4f42-80a4-0180dc880b6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.800387 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.823586 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.836862 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:03:31 crc kubenswrapper[4735]: E0128 10:03:31.837597 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="sg-core" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.837616 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="sg-core" Jan 28 10:03:31 crc kubenswrapper[4735]: E0128 10:03:31.837627 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="ceilometer-central-agent" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.837633 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="ceilometer-central-agent" Jan 28 10:03:31 crc kubenswrapper[4735]: E0128 10:03:31.837640 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="proxy-httpd" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.837647 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="proxy-httpd" Jan 28 10:03:31 crc kubenswrapper[4735]: E0128 10:03:31.837667 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="ceilometer-notification-agent" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.837672 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="ceilometer-notification-agent" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.837866 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="proxy-httpd" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.837882 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="ceilometer-central-agent" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.837894 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="sg-core" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.837908 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" containerName="ceilometer-notification-agent" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.839610 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.852403 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.852654 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.853388 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.855380 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.882856 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-log-httpd\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.882911 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.882991 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.883018 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.883048 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-config-data\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.883078 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-scripts\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.883139 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-run-httpd\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.883157 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd2nv\" (UniqueName: \"kubernetes.io/projected/dc307b43-b787-4849-beee-88ffc909eba8-kube-api-access-pd2nv\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.984811 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.984899 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.985135 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-config-data\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.985729 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-scripts\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.985926 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-run-httpd\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.985960 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd2nv\" (UniqueName: \"kubernetes.io/projected/dc307b43-b787-4849-beee-88ffc909eba8-kube-api-access-pd2nv\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.985998 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-log-httpd\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.986017 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.986619 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-run-httpd\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.986655 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-log-httpd\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.989339 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.989894 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-config-data\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.989909 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.990001 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-scripts\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:31 crc kubenswrapper[4735]: I0128 10:03:31.991041 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:32 crc kubenswrapper[4735]: I0128 10:03:32.007788 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd2nv\" (UniqueName: \"kubernetes.io/projected/dc307b43-b787-4849-beee-88ffc909eba8-kube-api-access-pd2nv\") pod \"ceilometer-0\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") " pod="openstack/ceilometer-0" Jan 28 10:03:32 crc kubenswrapper[4735]: I0128 10:03:32.171762 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:03:32 crc kubenswrapper[4735]: I0128 10:03:32.471271 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"166e2c48-1a0f-4aad-bc51-26f82159a795","Type":"ContainerStarted","Data":"d6127e2e257d315710879a4b64f2de558cb749d13713613f8c75ff658967ee8f"} Jan 28 10:03:32 crc kubenswrapper[4735]: I0128 10:03:32.471673 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:32 crc kubenswrapper[4735]: I0128 10:03:32.487722 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.487698313 podStartE2EDuration="2.487698313s" podCreationTimestamp="2026-01-28 10:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:03:32.485009228 +0000 UTC m=+1265.634673601" watchObservedRunningTime="2026-01-28 10:03:32.487698313 +0000 UTC m=+1265.637362686" Jan 28 10:03:32 crc kubenswrapper[4735]: I0128 10:03:32.651037 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:03:33 crc kubenswrapper[4735]: I0128 10:03:33.497671 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerStarted","Data":"8848dd821273fe12461deb8bbcdead9b3f24ce8e0cbf3a27ab822eb026d60b17"} Jan 28 10:03:33 crc kubenswrapper[4735]: I0128 10:03:33.500262 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerStarted","Data":"09a671adededd1752bb88a49708ecd15b3cef5ff6248c72123981eba18bab426"} Jan 28 10:03:33 crc kubenswrapper[4735]: I0128 10:03:33.519295 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2b5ceddc-88b4-4f42-80a4-0180dc880b6d" path="/var/lib/kubelet/pods/2b5ceddc-88b4-4f42-80a4-0180dc880b6d/volumes" Jan 28 10:03:34 crc kubenswrapper[4735]: E0128 10:03:34.023689 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 10:03:34 crc kubenswrapper[4735]: E0128 10:03:34.025290 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 10:03:34 crc kubenswrapper[4735]: E0128 10:03:34.028496 4735 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 10:03:34 crc kubenswrapper[4735]: E0128 10:03:34.028558 4735 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="10bcdbff-4b2f-4447-81ac-30d03aaef0f9" containerName="nova-scheduler-scheduler" Jan 28 10:03:34 crc kubenswrapper[4735]: I0128 10:03:34.506295 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerStarted","Data":"f9f040f050b85c3d4583578eeca404069f0c2d0a167cbd396f3915e04f167fc0"} Jan 28 10:03:34 crc 
kubenswrapper[4735]: I0128 10:03:34.989180 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.041585 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-combined-ca-bundle\") pod \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.041739 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-config-data\") pod \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.041836 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td8s7\" (UniqueName: \"kubernetes.io/projected/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-kube-api-access-td8s7\") pod \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\" (UID: \"10bcdbff-4b2f-4447-81ac-30d03aaef0f9\") " Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.046423 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-kube-api-access-td8s7" (OuterVolumeSpecName: "kube-api-access-td8s7") pod "10bcdbff-4b2f-4447-81ac-30d03aaef0f9" (UID: "10bcdbff-4b2f-4447-81ac-30d03aaef0f9"). InnerVolumeSpecName "kube-api-access-td8s7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.108932 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-config-data" (OuterVolumeSpecName: "config-data") pod "10bcdbff-4b2f-4447-81ac-30d03aaef0f9" (UID: "10bcdbff-4b2f-4447-81ac-30d03aaef0f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.114957 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10bcdbff-4b2f-4447-81ac-30d03aaef0f9" (UID: "10bcdbff-4b2f-4447-81ac-30d03aaef0f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.143917 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.143951 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td8s7\" (UniqueName: \"kubernetes.io/projected/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-kube-api-access-td8s7\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.143964 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10bcdbff-4b2f-4447-81ac-30d03aaef0f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.401359 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.449339 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-combined-ca-bundle\") pod \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.449426 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-logs\") pod \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.449493 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-config-data\") pod \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.449585 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xddf7\" (UniqueName: \"kubernetes.io/projected/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-kube-api-access-xddf7\") pod \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\" (UID: \"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8\") " Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.450264 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-logs" (OuterVolumeSpecName: "logs") pod "2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" (UID: "2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.457072 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-kube-api-access-xddf7" (OuterVolumeSpecName: "kube-api-access-xddf7") pod "2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" (UID: "2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8"). InnerVolumeSpecName "kube-api-access-xddf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.483016 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-config-data" (OuterVolumeSpecName: "config-data") pod "2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" (UID: "2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.499152 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" (UID: "2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.515787 4735 generic.go:334] "Generic (PLEG): container finished" podID="10bcdbff-4b2f-4447-81ac-30d03aaef0f9" containerID="becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854" exitCode=0 Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.515894 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.516443 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10bcdbff-4b2f-4447-81ac-30d03aaef0f9","Type":"ContainerDied","Data":"becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854"} Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.516472 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10bcdbff-4b2f-4447-81ac-30d03aaef0f9","Type":"ContainerDied","Data":"f47e393c1a91722a460f499ab1ba2e3ba13de72fa58618c3588641f817d2008d"} Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.516492 4735 scope.go:117] "RemoveContainer" containerID="becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.528053 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerStarted","Data":"781dc387102b10116d7707e33368895dbfd6bb72489bae195dc3ed1d6d725346"} Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.532791 4735 generic.go:334] "Generic (PLEG): container finished" podID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerID="71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b" exitCode=0 Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.532833 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8","Type":"ContainerDied","Data":"71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b"} Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.532859 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8","Type":"ContainerDied","Data":"dff692ea80f3fcb57a15cdcab336f99acc6610eb293923e756009989bde69e68"} Jan 28 10:03:35 crc 
kubenswrapper[4735]: I0128 10:03:35.532880 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.554438 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.554489 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.554503 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.554514 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xddf7\" (UniqueName: \"kubernetes.io/projected/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8-kube-api-access-xddf7\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.572035 4735 scope.go:117] "RemoveContainer" containerID="becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854" Jan 28 10:03:35 crc kubenswrapper[4735]: E0128 10:03:35.573674 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854\": container with ID starting with becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854 not found: ID does not exist" containerID="becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.573723 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854"} err="failed to get container status \"becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854\": rpc error: code = NotFound desc = could not find container \"becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854\": container with ID starting with becd72d522ae81542b73db82e5b79f256ff562622135af88c02f7984eb982854 not found: ID does not exist" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.573783 4735 scope.go:117] "RemoveContainer" containerID="71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.590820 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.629556 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.638826 4735 scope.go:117] "RemoveContainer" containerID="23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.652252 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.672840 4735 scope.go:117] "RemoveContainer" containerID="71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.673420 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:03:35 crc kubenswrapper[4735]: E0128 10:03:35.674595 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b\": container with ID starting with 71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b not found: ID does not exist" 
containerID="71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.674645 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b"} err="failed to get container status \"71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b\": rpc error: code = NotFound desc = could not find container \"71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b\": container with ID starting with 71adb30844ccb26c5e445c43bc8a31bdba1da2cc319b7208825df9421a54629b not found: ID does not exist" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.674677 4735 scope.go:117] "RemoveContainer" containerID="23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61" Jan 28 10:03:35 crc kubenswrapper[4735]: E0128 10:03:35.676179 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61\": container with ID starting with 23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61 not found: ID does not exist" containerID="23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.676206 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61"} err="failed to get container status \"23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61\": rpc error: code = NotFound desc = could not find container \"23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61\": container with ID starting with 23745a1fd89beb43e2f29f60e304138b35b85c6918ddd84e7dc9483848db3b61 not found: ID does not exist" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.686611 4735 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:03:35 crc kubenswrapper[4735]: E0128 10:03:35.687060 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10bcdbff-4b2f-4447-81ac-30d03aaef0f9" containerName="nova-scheduler-scheduler" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.687084 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="10bcdbff-4b2f-4447-81ac-30d03aaef0f9" containerName="nova-scheduler-scheduler" Jan 28 10:03:35 crc kubenswrapper[4735]: E0128 10:03:35.687099 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-log" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.687106 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-log" Jan 28 10:03:35 crc kubenswrapper[4735]: E0128 10:03:35.687139 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-api" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.687146 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-api" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.687415 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-api" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.687438 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="10bcdbff-4b2f-4447-81ac-30d03aaef0f9" containerName="nova-scheduler-scheduler" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.687456 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" containerName="nova-api-log" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.688277 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.691039 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.698410 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.713468 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.715435 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.724258 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.745168 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.752194 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.757091 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.757143 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldll5\" (UniqueName: \"kubernetes.io/projected/2273503d-bfed-4918-ab93-40b634819284-kube-api-access-ldll5\") pod \"nova-scheduler-0\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc 
kubenswrapper[4735]: I0128 10:03:35.757179 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-config-data\") pod \"nova-scheduler-0\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.757261 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpckt\" (UniqueName: \"kubernetes.io/projected/513109f3-0c25-4e95-bab3-b80b409d96ef-kube-api-access-mpckt\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.757304 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-config-data\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.757345 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/513109f3-0c25-4e95-bab3-b80b409d96ef-logs\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.757395 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.859272 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ldll5\" (UniqueName: \"kubernetes.io/projected/2273503d-bfed-4918-ab93-40b634819284-kube-api-access-ldll5\") pod \"nova-scheduler-0\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.859320 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-config-data\") pod \"nova-scheduler-0\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.859409 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpckt\" (UniqueName: \"kubernetes.io/projected/513109f3-0c25-4e95-bab3-b80b409d96ef-kube-api-access-mpckt\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.859451 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-config-data\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.859483 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/513109f3-0c25-4e95-bab3-b80b409d96ef-logs\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.859520 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 
10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.859567 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.861107 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/513109f3-0c25-4e95-bab3-b80b409d96ef-logs\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.863956 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.865018 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-config-data\") pod \"nova-scheduler-0\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.868674 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-config-data\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.869918 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.876209 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldll5\" (UniqueName: \"kubernetes.io/projected/2273503d-bfed-4918-ab93-40b634819284-kube-api-access-ldll5\") pod \"nova-scheduler-0\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " pod="openstack/nova-scheduler-0" Jan 28 10:03:35 crc kubenswrapper[4735]: I0128 10:03:35.881495 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpckt\" (UniqueName: \"kubernetes.io/projected/513109f3-0c25-4e95-bab3-b80b409d96ef-kube-api-access-mpckt\") pod \"nova-api-0\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " pod="openstack/nova-api-0" Jan 28 10:03:36 crc kubenswrapper[4735]: I0128 10:03:36.032889 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:03:36 crc kubenswrapper[4735]: I0128 10:03:36.046494 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:03:36 crc kubenswrapper[4735]: I0128 10:03:36.504224 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:03:36 crc kubenswrapper[4735]: I0128 10:03:36.548929 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2273503d-bfed-4918-ab93-40b634819284","Type":"ContainerStarted","Data":"23fc75ecf83a7064339886edd4f3799cf674dd90fe6e58401e2ccbe4f3e14ae5"} Jan 28 10:03:36 crc kubenswrapper[4735]: I0128 10:03:36.592266 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:03:37 crc kubenswrapper[4735]: I0128 10:03:37.517739 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10bcdbff-4b2f-4447-81ac-30d03aaef0f9" path="/var/lib/kubelet/pods/10bcdbff-4b2f-4447-81ac-30d03aaef0f9/volumes" Jan 28 10:03:37 crc kubenswrapper[4735]: I0128 10:03:37.518651 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8" path="/var/lib/kubelet/pods/2d5487cf-ca71-4fc8-bbe1-c5d87d85e9e8/volumes" Jan 28 10:03:37 crc kubenswrapper[4735]: I0128 10:03:37.563463 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"513109f3-0c25-4e95-bab3-b80b409d96ef","Type":"ContainerStarted","Data":"82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf"} Jan 28 10:03:37 crc kubenswrapper[4735]: I0128 10:03:37.563530 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"513109f3-0c25-4e95-bab3-b80b409d96ef","Type":"ContainerStarted","Data":"868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691"} Jan 28 10:03:37 crc kubenswrapper[4735]: I0128 10:03:37.563547 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"513109f3-0c25-4e95-bab3-b80b409d96ef","Type":"ContainerStarted","Data":"d65ad8275a30d11556b9cbd056309c494f1eca132cb3bd518289af3f29330784"} Jan 28 10:03:37 crc kubenswrapper[4735]: I0128 10:03:37.566846 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2273503d-bfed-4918-ab93-40b634819284","Type":"ContainerStarted","Data":"5a6b603f340df67992e2184f565c84c3969362d0aec46fb97a7508adce6f613c"} Jan 28 10:03:37 crc kubenswrapper[4735]: I0128 10:03:37.602772 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.602727568 podStartE2EDuration="2.602727568s" podCreationTimestamp="2026-01-28 10:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:03:37.59845637 +0000 UTC m=+1270.748120743" watchObservedRunningTime="2026-01-28 10:03:37.602727568 +0000 UTC m=+1270.752391941" Jan 28 10:03:37 crc kubenswrapper[4735]: I0128 10:03:37.627564 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6275355989999998 podStartE2EDuration="2.627535599s" podCreationTimestamp="2026-01-28 10:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:03:37.616988431 +0000 UTC m=+1270.766652814" watchObservedRunningTime="2026-01-28 10:03:37.627535599 +0000 UTC m=+1270.777199972" Jan 28 10:03:40 crc kubenswrapper[4735]: I0128 10:03:40.859213 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 28 10:03:41 crc kubenswrapper[4735]: I0128 10:03:41.033737 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 10:03:41 crc kubenswrapper[4735]: I0128 10:03:41.604953 4735 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerStarted","Data":"8fbcf9ffd056f023f261821810435ceaa4a202ef8db46cdb13cdedf8dbc5a600"} Jan 28 10:03:41 crc kubenswrapper[4735]: I0128 10:03:41.606463 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 10:03:46 crc kubenswrapper[4735]: I0128 10:03:46.033898 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 10:03:46 crc kubenswrapper[4735]: I0128 10:03:46.047726 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 10:03:46 crc kubenswrapper[4735]: I0128 10:03:46.047805 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 10:03:46 crc kubenswrapper[4735]: I0128 10:03:46.064717 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 10:03:46 crc kubenswrapper[4735]: I0128 10:03:46.082027 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.98951102 podStartE2EDuration="15.082002133s" podCreationTimestamp="2026-01-28 10:03:31 +0000 UTC" firstStartedPulling="2026-01-28 10:03:32.656015468 +0000 UTC m=+1265.805679841" lastFinishedPulling="2026-01-28 10:03:40.748506571 +0000 UTC m=+1273.898170954" observedRunningTime="2026-01-28 10:03:41.637286447 +0000 UTC m=+1274.786950820" watchObservedRunningTime="2026-01-28 10:03:46.082002133 +0000 UTC m=+1279.231666536" Jan 28 10:03:46 crc kubenswrapper[4735]: I0128 10:03:46.719476 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 10:03:47 crc kubenswrapper[4735]: I0128 10:03:47.130968 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 10:03:47 crc kubenswrapper[4735]: I0128 10:03:47.130980 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 10:03:48 crc kubenswrapper[4735]: E0128 10:03:48.104431 4735 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cded9864744db9abf2ebc0e20803b2d6727d7835070ebb13365579b5b88e0733/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cded9864744db9abf2ebc0e20803b2d6727d7835070ebb13365579b5b88e0733/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_2b5ceddc-88b4-4f42-80a4-0180dc880b6d/ceilometer-notification-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_2b5ceddc-88b4-4f42-80a4-0180dc880b6d/ceilometer-notification-agent/0.log: no such file or directory Jan 28 10:03:51 crc kubenswrapper[4735]: E0128 10:03:51.655657 4735 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1e5ffa98dc36568ebc6ebd7c8d32f28c453ffe4c1bfb295b19b21c8c3cc9675e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1e5ffa98dc36568ebc6ebd7c8d32f28c453ffe4c1bfb295b19b21c8c3cc9675e/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-5784cf869f-95lqd_312624db-583e-4c95-b802-248457108339/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-5784cf869f-95lqd_312624db-583e-4c95-b802-248457108339/dnsmasq-dns/0.log: no such 
file or directory Jan 28 10:03:53 crc kubenswrapper[4735]: E0128 10:03:53.606520 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2273503d_bfed_4918_ab93_40b634819284.slice/crio-conmon-5a6b603f340df67992e2184f565c84c3969362d0aec46fb97a7508adce6f613c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8b6f417_ef4f_4a3f_8603_7eb8b57714ff.slice/crio-52deaa193137e26997792b53653a7037ec145f886eec81a6d4e38d3cc5e704bd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25319b8b_b6a5_46fc_927f_e98c107bfc0e.slice/crio-conmon-dec6ffdd43f7c0945a00ff0be119826b2bed3b84e45d7c63c867bc5bc194e70b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25319b8b_b6a5_46fc_927f_e98c107bfc0e.slice/crio-dec6ffdd43f7c0945a00ff0be119826b2bed3b84e45d7c63c867bc5bc194e70b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8b6f417_ef4f_4a3f_8603_7eb8b57714ff.slice/crio-conmon-52deaa193137e26997792b53653a7037ec145f886eec81a6d4e38d3cc5e704bd.scope\": RecentStats: unable to find data in memory cache]" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.768691 4735 generic.go:334] "Generic (PLEG): container finished" podID="d8b6f417-ef4f-4a3f-8603-7eb8b57714ff" containerID="52deaa193137e26997792b53653a7037ec145f886eec81a6d4e38d3cc5e704bd" exitCode=137 Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.768901 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff","Type":"ContainerDied","Data":"52deaa193137e26997792b53653a7037ec145f886eec81a6d4e38d3cc5e704bd"} Jan 28 10:03:53 
crc kubenswrapper[4735]: I0128 10:03:53.769117 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff","Type":"ContainerDied","Data":"a50b4b6e361b8b172ed5fe4715cf508dbbf5f55461dc0e143e9ab7d12a6959ff"} Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.769133 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a50b4b6e361b8b172ed5fe4715cf508dbbf5f55461dc0e143e9ab7d12a6959ff" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.770779 4735 generic.go:334] "Generic (PLEG): container finished" podID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerID="dec6ffdd43f7c0945a00ff0be119826b2bed3b84e45d7c63c867bc5bc194e70b" exitCode=137 Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.770806 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25319b8b-b6a5-46fc-927f-e98c107bfc0e","Type":"ContainerDied","Data":"dec6ffdd43f7c0945a00ff0be119826b2bed3b84e45d7c63c867bc5bc194e70b"} Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.770823 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"25319b8b-b6a5-46fc-927f-e98c107bfc0e","Type":"ContainerDied","Data":"9ccbd8331473f662c167b85cd6a295ab4bb1f7d93f8eabc39dc0973cdf5800e7"} Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.770834 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ccbd8331473f662c167b85cd6a295ab4bb1f7d93f8eabc39dc0973cdf5800e7" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.806865 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.811300 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.918664 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-combined-ca-bundle\") pod \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.918763 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-config-data\") pod \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.918923 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25319b8b-b6a5-46fc-927f-e98c107bfc0e-logs\") pod \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.918970 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-config-data\") pod \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.919021 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chgxs\" (UniqueName: \"kubernetes.io/projected/25319b8b-b6a5-46fc-927f-e98c107bfc0e-kube-api-access-chgxs\") pod \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.919070 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-combined-ca-bundle\") pod \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\" (UID: \"25319b8b-b6a5-46fc-927f-e98c107bfc0e\") " Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.919137 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs7rw\" (UniqueName: \"kubernetes.io/projected/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-kube-api-access-bs7rw\") pod \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\" (UID: \"d8b6f417-ef4f-4a3f-8603-7eb8b57714ff\") " Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.919309 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25319b8b-b6a5-46fc-927f-e98c107bfc0e-logs" (OuterVolumeSpecName: "logs") pod "25319b8b-b6a5-46fc-927f-e98c107bfc0e" (UID: "25319b8b-b6a5-46fc-927f-e98c107bfc0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.920123 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25319b8b-b6a5-46fc-927f-e98c107bfc0e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.924916 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-kube-api-access-bs7rw" (OuterVolumeSpecName: "kube-api-access-bs7rw") pod "d8b6f417-ef4f-4a3f-8603-7eb8b57714ff" (UID: "d8b6f417-ef4f-4a3f-8603-7eb8b57714ff"). InnerVolumeSpecName "kube-api-access-bs7rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.930086 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25319b8b-b6a5-46fc-927f-e98c107bfc0e-kube-api-access-chgxs" (OuterVolumeSpecName: "kube-api-access-chgxs") pod "25319b8b-b6a5-46fc-927f-e98c107bfc0e" (UID: "25319b8b-b6a5-46fc-927f-e98c107bfc0e"). InnerVolumeSpecName "kube-api-access-chgxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.948205 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-config-data" (OuterVolumeSpecName: "config-data") pod "25319b8b-b6a5-46fc-927f-e98c107bfc0e" (UID: "25319b8b-b6a5-46fc-927f-e98c107bfc0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.955830 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25319b8b-b6a5-46fc-927f-e98c107bfc0e" (UID: "25319b8b-b6a5-46fc-927f-e98c107bfc0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.958585 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8b6f417-ef4f-4a3f-8603-7eb8b57714ff" (UID: "d8b6f417-ef4f-4a3f-8603-7eb8b57714ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:53 crc kubenswrapper[4735]: I0128 10:03:53.967209 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-config-data" (OuterVolumeSpecName: "config-data") pod "d8b6f417-ef4f-4a3f-8603-7eb8b57714ff" (UID: "d8b6f417-ef4f-4a3f-8603-7eb8b57714ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.022413 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.022448 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.022489 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.022499 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chgxs\" (UniqueName: \"kubernetes.io/projected/25319b8b-b6a5-46fc-927f-e98c107bfc0e-kube-api-access-chgxs\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.022511 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25319b8b-b6a5-46fc-927f-e98c107bfc0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.022519 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs7rw\" (UniqueName: 
\"kubernetes.io/projected/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff-kube-api-access-bs7rw\") on node \"crc\" DevicePath \"\"" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.779067 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.779068 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.817805 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.834163 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.848816 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.861895 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.887695 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 10:03:54 crc kubenswrapper[4735]: E0128 10:03:54.888126 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b6f417-ef4f-4a3f-8603-7eb8b57714ff" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.888147 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b6f417-ef4f-4a3f-8603-7eb8b57714ff" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 10:03:54 crc kubenswrapper[4735]: E0128 10:03:54.888170 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerName="nova-metadata-metadata" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.888181 4735 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerName="nova-metadata-metadata" Jan 28 10:03:54 crc kubenswrapper[4735]: E0128 10:03:54.888216 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerName="nova-metadata-log" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.888223 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerName="nova-metadata-log" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.888387 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerName="nova-metadata-log" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.888400 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" containerName="nova-metadata-metadata" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.888415 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b6f417-ef4f-4a3f-8603-7eb8b57714ff" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.889003 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.890723 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.891274 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.891524 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.906452 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.908106 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.911226 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.911312 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.917407 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 10:03:54 crc kubenswrapper[4735]: I0128 10:03:54.930533 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.042970 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.043340 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-config-data\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.043425 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.043616 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7mcc\" (UniqueName: \"kubernetes.io/projected/e700d210-fc95-45b8-a479-1fec0e726948-kube-api-access-p7mcc\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.043711 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.043780 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.043822 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e700d210-fc95-45b8-a479-1fec0e726948-logs\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.043935 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.043994 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtx48\" (UniqueName: \"kubernetes.io/projected/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-kube-api-access-rtx48\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.044048 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145159 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-config-data\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145226 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145286 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7mcc\" (UniqueName: \"kubernetes.io/projected/e700d210-fc95-45b8-a479-1fec0e726948-kube-api-access-p7mcc\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145325 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145345 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145368 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e700d210-fc95-45b8-a479-1fec0e726948-logs\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145403 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145433 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtx48\" (UniqueName: \"kubernetes.io/projected/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-kube-api-access-rtx48\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145464 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.145515 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.146073 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e700d210-fc95-45b8-a479-1fec0e726948-logs\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.149837 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.149974 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.150355 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.150889 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-config-data\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.151068 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.151721 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.151893 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.162535 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7mcc\" (UniqueName: \"kubernetes.io/projected/e700d210-fc95-45b8-a479-1fec0e726948-kube-api-access-p7mcc\") pod \"nova-metadata-0\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.167535 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtx48\" (UniqueName: \"kubernetes.io/projected/5d7ae116-a9d5-4dc3-bc01-3a8fb08af693-kube-api-access-rtx48\") pod \"nova-cell1-novncproxy-0\" (UID: \"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.205111 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.232399 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.514548 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25319b8b-b6a5-46fc-927f-e98c107bfc0e" path="/var/lib/kubelet/pods/25319b8b-b6a5-46fc-927f-e98c107bfc0e/volumes"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.516013 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b6f417-ef4f-4a3f-8603-7eb8b57714ff" path="/var/lib/kubelet/pods/d8b6f417-ef4f-4a3f-8603-7eb8b57714ff/volumes"
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.556072 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.671986 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 10:03:55 crc kubenswrapper[4735]: W0128 10:03:55.686494 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d7ae116_a9d5_4dc3_bc01_3a8fb08af693.slice/crio-828f383cb53775996a519f68e6bcc286767b4d8fabc5169772320290cc4536c3 WatchSource:0}: Error finding container 828f383cb53775996a519f68e6bcc286767b4d8fabc5169772320290cc4536c3: Status 404 returned error can't find the container with id 828f383cb53775996a519f68e6bcc286767b4d8fabc5169772320290cc4536c3
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.792186 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e700d210-fc95-45b8-a479-1fec0e726948","Type":"ContainerStarted","Data":"2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35"}
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.792559 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e700d210-fc95-45b8-a479-1fec0e726948","Type":"ContainerStarted","Data":"ecbeb8541c9b245111a5a48565df64b1279cd7d39e80d515be6f1bba3404048b"}
Jan 28 10:03:55 crc kubenswrapper[4735]: I0128 10:03:55.793258 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693","Type":"ContainerStarted","Data":"828f383cb53775996a519f68e6bcc286767b4d8fabc5169772320290cc4536c3"}
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.053800 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.054278 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.054963 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.059357 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.806574 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e700d210-fc95-45b8-a479-1fec0e726948","Type":"ContainerStarted","Data":"78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848"}
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.814181 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5d7ae116-a9d5-4dc3-bc01-3a8fb08af693","Type":"ContainerStarted","Data":"3df533545f6a5f39b5f20f32d44aeb89558e66a9406ce9bf1e6a016469220a18"}
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.814850 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.820413 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.839720 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.839691962 podStartE2EDuration="2.839691962s" podCreationTimestamp="2026-01-28 10:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:03:56.827955721 +0000 UTC m=+1289.977620104" watchObservedRunningTime="2026-01-28 10:03:56.839691962 +0000 UTC m=+1289.989356355"
Jan 28 10:03:56 crc kubenswrapper[4735]: I0128 10:03:56.858365 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.858331265 podStartE2EDuration="2.858331265s" podCreationTimestamp="2026-01-28 10:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:03:56.84349529 +0000 UTC m=+1289.993159703" watchObservedRunningTime="2026-01-28 10:03:56.858331265 +0000 UTC m=+1290.007995678"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.035234 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-688kc"]
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.036759 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.071737 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-688kc"]
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.191047 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.191093 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-config\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.191117 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg69l\" (UniqueName: \"kubernetes.io/projected/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-kube-api-access-tg69l\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.191359 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.191771 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.191829 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.292964 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.293018 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.293076 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.293100 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-config\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.293119 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg69l\" (UniqueName: \"kubernetes.io/projected/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-kube-api-access-tg69l\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.293179 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.294250 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.297362 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.298040 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.298105 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-config\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.298182 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.320841 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg69l\" (UniqueName: \"kubernetes.io/projected/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-kube-api-access-tg69l\") pod \"dnsmasq-dns-59cf4bdb65-688kc\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.368865 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:03:57 crc kubenswrapper[4735]: I0128 10:03:57.868473 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-688kc"]
Jan 28 10:03:57 crc kubenswrapper[4735]: W0128 10:03:57.869361 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5699a8e_2a07_4f65_9512_2d434ff1f8ed.slice/crio-10a333a65f88407ea52b6a7320eb539c07f9d7c82d2e8ebe17e75df8f4ef994e WatchSource:0}: Error finding container 10a333a65f88407ea52b6a7320eb539c07f9d7c82d2e8ebe17e75df8f4ef994e: Status 404 returned error can't find the container with id 10a333a65f88407ea52b6a7320eb539c07f9d7c82d2e8ebe17e75df8f4ef994e
Jan 28 10:03:58 crc kubenswrapper[4735]: I0128 10:03:58.831315 4735 generic.go:334] "Generic (PLEG): container finished" podID="e5699a8e-2a07-4f65-9512-2d434ff1f8ed" containerID="9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb" exitCode=0
Jan 28 10:03:58 crc kubenswrapper[4735]: I0128 10:03:58.831376 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" event={"ID":"e5699a8e-2a07-4f65-9512-2d434ff1f8ed","Type":"ContainerDied","Data":"9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb"}
Jan 28 10:03:58 crc kubenswrapper[4735]: I0128 10:03:58.831727 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" event={"ID":"e5699a8e-2a07-4f65-9512-2d434ff1f8ed","Type":"ContainerStarted","Data":"10a333a65f88407ea52b6a7320eb539c07f9d7c82d2e8ebe17e75df8f4ef994e"}
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.073960 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.075591 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="ceilometer-central-agent" containerID="cri-o://8848dd821273fe12461deb8bbcdead9b3f24ce8e0cbf3a27ab822eb026d60b17" gracePeriod=30
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.075678 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="sg-core" containerID="cri-o://781dc387102b10116d7707e33368895dbfd6bb72489bae195dc3ed1d6d725346" gracePeriod=30
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.075691 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="ceilometer-notification-agent" containerID="cri-o://f9f040f050b85c3d4583578eeca404069f0c2d0a167cbd396f3915e04f167fc0" gracePeriod=30
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.075682 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="proxy-httpd" containerID="cri-o://8fbcf9ffd056f023f261821810435ceaa4a202ef8db46cdb13cdedf8dbc5a600" gracePeriod=30
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.087628 4735 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.196:3000/\": EOF"
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.647094 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.843853 4735 generic.go:334] "Generic (PLEG): container finished" podID="dc307b43-b787-4849-beee-88ffc909eba8" containerID="8fbcf9ffd056f023f261821810435ceaa4a202ef8db46cdb13cdedf8dbc5a600" exitCode=0
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.843885 4735 generic.go:334] "Generic (PLEG): container finished" podID="dc307b43-b787-4849-beee-88ffc909eba8" containerID="781dc387102b10116d7707e33368895dbfd6bb72489bae195dc3ed1d6d725346" exitCode=2
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.843894 4735 generic.go:334] "Generic (PLEG): container finished" podID="dc307b43-b787-4849-beee-88ffc909eba8" containerID="8848dd821273fe12461deb8bbcdead9b3f24ce8e0cbf3a27ab822eb026d60b17" exitCode=0
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.843900 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerDied","Data":"8fbcf9ffd056f023f261821810435ceaa4a202ef8db46cdb13cdedf8dbc5a600"}
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.843942 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerDied","Data":"781dc387102b10116d7707e33368895dbfd6bb72489bae195dc3ed1d6d725346"}
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.843954 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerDied","Data":"8848dd821273fe12461deb8bbcdead9b3f24ce8e0cbf3a27ab822eb026d60b17"}
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.846403 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" event={"ID":"e5699a8e-2a07-4f65-9512-2d434ff1f8ed","Type":"ContainerStarted","Data":"93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8"}
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.846615 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-log" containerID="cri-o://868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691" gracePeriod=30
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.846802 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-api" containerID="cri-o://82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf" gracePeriod=30
Jan 28 10:03:59 crc kubenswrapper[4735]: I0128 10:03:59.880999 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" podStartSLOduration=2.880982273 podStartE2EDuration="2.880982273s" podCreationTimestamp="2026-01-28 10:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:03:59.875300776 +0000 UTC m=+1293.024965149" watchObservedRunningTime="2026-01-28 10:03:59.880982273 +0000 UTC m=+1293.030646646"
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.206135 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.233014 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.233082 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.871478 4735 generic.go:334] "Generic (PLEG): container finished" podID="dc307b43-b787-4849-beee-88ffc909eba8" containerID="f9f040f050b85c3d4583578eeca404069f0c2d0a167cbd396f3915e04f167fc0" exitCode=0
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.871873 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerDied","Data":"f9f040f050b85c3d4583578eeca404069f0c2d0a167cbd396f3915e04f167fc0"}
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.871899 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc307b43-b787-4849-beee-88ffc909eba8","Type":"ContainerDied","Data":"09a671adededd1752bb88a49708ecd15b3cef5ff6248c72123981eba18bab426"}
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.871935 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09a671adededd1752bb88a49708ecd15b3cef5ff6248c72123981eba18bab426"
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.880880 4735 generic.go:334] "Generic (PLEG): container finished" podID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerID="868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691" exitCode=143
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.880938 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"513109f3-0c25-4e95-bab3-b80b409d96ef","Type":"ContainerDied","Data":"868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691"}
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.882937 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc"
Jan 28 10:04:00 crc kubenswrapper[4735]: I0128 10:04:00.883814 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.070149 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-ceilometer-tls-certs\") pod \"dc307b43-b787-4849-beee-88ffc909eba8\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") "
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.070205 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-sg-core-conf-yaml\") pod \"dc307b43-b787-4849-beee-88ffc909eba8\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") "
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.070317 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-scripts\") pod \"dc307b43-b787-4849-beee-88ffc909eba8\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") "
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.070348 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-config-data\") pod \"dc307b43-b787-4849-beee-88ffc909eba8\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") "
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.070388 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-log-httpd\") pod \"dc307b43-b787-4849-beee-88ffc909eba8\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") "
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.070421 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-run-httpd\") pod \"dc307b43-b787-4849-beee-88ffc909eba8\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") "
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.070475 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-combined-ca-bundle\") pod \"dc307b43-b787-4849-beee-88ffc909eba8\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") "
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.070580 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd2nv\" (UniqueName: \"kubernetes.io/projected/dc307b43-b787-4849-beee-88ffc909eba8-kube-api-access-pd2nv\") pod \"dc307b43-b787-4849-beee-88ffc909eba8\" (UID: \"dc307b43-b787-4849-beee-88ffc909eba8\") "
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.071592 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc307b43-b787-4849-beee-88ffc909eba8" (UID: "dc307b43-b787-4849-beee-88ffc909eba8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.071709 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc307b43-b787-4849-beee-88ffc909eba8" (UID: "dc307b43-b787-4849-beee-88ffc909eba8"). InnerVolumeSpecName "log-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.082103 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-scripts" (OuterVolumeSpecName: "scripts") pod "dc307b43-b787-4849-beee-88ffc909eba8" (UID: "dc307b43-b787-4849-beee-88ffc909eba8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.084956 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc307b43-b787-4849-beee-88ffc909eba8-kube-api-access-pd2nv" (OuterVolumeSpecName: "kube-api-access-pd2nv") pod "dc307b43-b787-4849-beee-88ffc909eba8" (UID: "dc307b43-b787-4849-beee-88ffc909eba8"). InnerVolumeSpecName "kube-api-access-pd2nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.126799 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc307b43-b787-4849-beee-88ffc909eba8" (UID: "dc307b43-b787-4849-beee-88ffc909eba8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.135700 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "dc307b43-b787-4849-beee-88ffc909eba8" (UID: "dc307b43-b787-4849-beee-88ffc909eba8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.206442 4735 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.206478 4735 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.206489 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.206501 4735 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.206514 4735 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc307b43-b787-4849-beee-88ffc909eba8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.206524 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd2nv\" (UniqueName: \"kubernetes.io/projected/dc307b43-b787-4849-beee-88ffc909eba8-kube-api-access-pd2nv\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.210299 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc307b43-b787-4849-beee-88ffc909eba8" (UID: 
"dc307b43-b787-4849-beee-88ffc909eba8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.232901 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-config-data" (OuterVolumeSpecName: "config-data") pod "dc307b43-b787-4849-beee-88ffc909eba8" (UID: "dc307b43-b787-4849-beee-88ffc909eba8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.307989 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.308020 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc307b43-b787-4849-beee-88ffc909eba8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.889717 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.924176 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.952364 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.965840 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:04:01 crc kubenswrapper[4735]: E0128 10:04:01.966333 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="proxy-httpd" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.966356 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="proxy-httpd" Jan 28 10:04:01 crc kubenswrapper[4735]: E0128 10:04:01.966370 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="ceilometer-notification-agent" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.966378 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="ceilometer-notification-agent" Jan 28 10:04:01 crc kubenswrapper[4735]: E0128 10:04:01.966392 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="ceilometer-central-agent" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.966400 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="ceilometer-central-agent" Jan 28 10:04:01 crc kubenswrapper[4735]: E0128 10:04:01.966412 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="sg-core" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.966419 4735 
state_mem.go:107] "Deleted CPUSet assignment" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="sg-core" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.966641 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="ceilometer-notification-agent" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.966672 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="proxy-httpd" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.966687 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="sg-core" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.966701 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc307b43-b787-4849-beee-88ffc909eba8" containerName="ceilometer-central-agent" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.968416 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.969988 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.971596 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.972661 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 10:04:01 crc kubenswrapper[4735]: I0128 10:04:01.972861 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.120655 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.120994 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-config-data\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.121076 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.121202 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.121316 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdpb\" (UniqueName: \"kubernetes.io/projected/923bb257-f04f-4939-ad8a-10c4e6cc8aec-kube-api-access-cqdpb\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.121419 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923bb257-f04f-4939-ad8a-10c4e6cc8aec-log-httpd\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.121640 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923bb257-f04f-4939-ad8a-10c4e6cc8aec-run-httpd\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.121730 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-scripts\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.223267 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923bb257-f04f-4939-ad8a-10c4e6cc8aec-log-httpd\") pod \"ceilometer-0\" (UID: 
\"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.223339 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923bb257-f04f-4939-ad8a-10c4e6cc8aec-run-httpd\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.223376 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-scripts\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.223411 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.223496 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-config-data\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.223515 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.223547 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.223567 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdpb\" (UniqueName: \"kubernetes.io/projected/923bb257-f04f-4939-ad8a-10c4e6cc8aec-kube-api-access-cqdpb\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.223962 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923bb257-f04f-4939-ad8a-10c4e6cc8aec-log-httpd\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.224328 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/923bb257-f04f-4939-ad8a-10c4e6cc8aec-run-httpd\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.230210 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-scripts\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.230787 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 
10:04:02.230914 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-config-data\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.232475 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.233434 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923bb257-f04f-4939-ad8a-10c4e6cc8aec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.244228 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdpb\" (UniqueName: \"kubernetes.io/projected/923bb257-f04f-4939-ad8a-10c4e6cc8aec-kube-api-access-cqdpb\") pod \"ceilometer-0\" (UID: \"923bb257-f04f-4939-ad8a-10c4e6cc8aec\") " pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.285670 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.779871 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 10:04:02 crc kubenswrapper[4735]: W0128 10:04:02.792853 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod923bb257_f04f_4939_ad8a_10c4e6cc8aec.slice/crio-5e1b7d7b1a3a1ee3753795fa9c9414f7c106a83cef9923a087e8157e36f782be WatchSource:0}: Error finding container 5e1b7d7b1a3a1ee3753795fa9c9414f7c106a83cef9923a087e8157e36f782be: Status 404 returned error can't find the container with id 5e1b7d7b1a3a1ee3753795fa9c9414f7c106a83cef9923a087e8157e36f782be Jan 28 10:04:02 crc kubenswrapper[4735]: I0128 10:04:02.898673 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923bb257-f04f-4939-ad8a-10c4e6cc8aec","Type":"ContainerStarted","Data":"5e1b7d7b1a3a1ee3753795fa9c9414f7c106a83cef9923a087e8157e36f782be"} Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.397002 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.451087 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-combined-ca-bundle\") pod \"513109f3-0c25-4e95-bab3-b80b409d96ef\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.451226 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpckt\" (UniqueName: \"kubernetes.io/projected/513109f3-0c25-4e95-bab3-b80b409d96ef-kube-api-access-mpckt\") pod \"513109f3-0c25-4e95-bab3-b80b409d96ef\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.451269 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-config-data\") pod \"513109f3-0c25-4e95-bab3-b80b409d96ef\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.451341 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/513109f3-0c25-4e95-bab3-b80b409d96ef-logs\") pod \"513109f3-0c25-4e95-bab3-b80b409d96ef\" (UID: \"513109f3-0c25-4e95-bab3-b80b409d96ef\") " Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.452491 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/513109f3-0c25-4e95-bab3-b80b409d96ef-logs" (OuterVolumeSpecName: "logs") pod "513109f3-0c25-4e95-bab3-b80b409d96ef" (UID: "513109f3-0c25-4e95-bab3-b80b409d96ef"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.468209 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513109f3-0c25-4e95-bab3-b80b409d96ef-kube-api-access-mpckt" (OuterVolumeSpecName: "kube-api-access-mpckt") pod "513109f3-0c25-4e95-bab3-b80b409d96ef" (UID: "513109f3-0c25-4e95-bab3-b80b409d96ef"). InnerVolumeSpecName "kube-api-access-mpckt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.478239 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "513109f3-0c25-4e95-bab3-b80b409d96ef" (UID: "513109f3-0c25-4e95-bab3-b80b409d96ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.497864 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-config-data" (OuterVolumeSpecName: "config-data") pod "513109f3-0c25-4e95-bab3-b80b409d96ef" (UID: "513109f3-0c25-4e95-bab3-b80b409d96ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.515456 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc307b43-b787-4849-beee-88ffc909eba8" path="/var/lib/kubelet/pods/dc307b43-b787-4849-beee-88ffc909eba8/volumes" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.553683 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.553720 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpckt\" (UniqueName: \"kubernetes.io/projected/513109f3-0c25-4e95-bab3-b80b409d96ef-kube-api-access-mpckt\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.553732 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513109f3-0c25-4e95-bab3-b80b409d96ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.553740 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/513109f3-0c25-4e95-bab3-b80b409d96ef-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.913250 4735 generic.go:334] "Generic (PLEG): container finished" podID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerID="82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf" exitCode=0 Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.913304 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.913329 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"513109f3-0c25-4e95-bab3-b80b409d96ef","Type":"ContainerDied","Data":"82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf"} Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.913773 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"513109f3-0c25-4e95-bab3-b80b409d96ef","Type":"ContainerDied","Data":"d65ad8275a30d11556b9cbd056309c494f1eca132cb3bd518289af3f29330784"} Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.913800 4735 scope.go:117] "RemoveContainer" containerID="82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.920965 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923bb257-f04f-4939-ad8a-10c4e6cc8aec","Type":"ContainerStarted","Data":"16199f692caba2d8ed223166295fae76662cac8329b09346afb7ea886ebf8d9b"} Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.955453 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.970824 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.971215 4735 scope.go:117] "RemoveContainer" containerID="868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.985831 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:03 crc kubenswrapper[4735]: E0128 10:04:03.986407 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-log" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.986428 
4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-log" Jan 28 10:04:03 crc kubenswrapper[4735]: E0128 10:04:03.986475 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-api" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.986484 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-api" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.986705 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-log" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.986724 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" containerName="nova-api-api" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.988211 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.991672 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.992141 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.995838 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:03 crc kubenswrapper[4735]: I0128 10:04:03.996424 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.023084 4735 scope.go:117] "RemoveContainer" containerID="82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf" Jan 28 10:04:04 crc kubenswrapper[4735]: E0128 10:04:04.023653 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf\": container with ID starting with 82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf not found: ID does not exist" containerID="82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.023693 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf"} err="failed to get container status \"82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf\": rpc error: code = NotFound desc = could not find container \"82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf\": container with ID starting with 82541e7dcb48ac84271bf8c17867176bc3087cfccaf2e9fc28ed68d83d080caf not found: ID does not exist" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.023723 4735 
scope.go:117] "RemoveContainer" containerID="868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691" Jan 28 10:04:04 crc kubenswrapper[4735]: E0128 10:04:04.024172 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691\": container with ID starting with 868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691 not found: ID does not exist" containerID="868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.024221 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691"} err="failed to get container status \"868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691\": rpc error: code = NotFound desc = could not find container \"868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691\": container with ID starting with 868ffe7195c0ee773006fabf4735b6184498c8362dbbc85a05d01840d2ee8691 not found: ID does not exist" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.065334 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.065485 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.065518 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.065675 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c24c57a1-0cee-4143-a2cd-f7151e519ca3-logs\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.065804 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9d2w\" (UniqueName: \"kubernetes.io/projected/c24c57a1-0cee-4143-a2cd-f7151e519ca3-kube-api-access-n9d2w\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.065982 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-config-data\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.167405 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c24c57a1-0cee-4143-a2cd-f7151e519ca3-logs\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.167719 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9d2w\" (UniqueName: 
\"kubernetes.io/projected/c24c57a1-0cee-4143-a2cd-f7151e519ca3-kube-api-access-n9d2w\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.167787 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-config-data\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.167851 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.168063 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c24c57a1-0cee-4143-a2cd-f7151e519ca3-logs\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.168487 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.168525 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.172724 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.173242 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-config-data\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.173904 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.173955 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.185452 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9d2w\" (UniqueName: \"kubernetes.io/projected/c24c57a1-0cee-4143-a2cd-f7151e519ca3-kube-api-access-n9d2w\") pod \"nova-api-0\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.319887 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.816797 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.935362 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24c57a1-0cee-4143-a2cd-f7151e519ca3","Type":"ContainerStarted","Data":"8f09a212414f339d01afcb88bc987daf3c7e1160a7142c7f890246d9c79fbded"} Jan 28 10:04:04 crc kubenswrapper[4735]: I0128 10:04:04.937380 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923bb257-f04f-4939-ad8a-10c4e6cc8aec","Type":"ContainerStarted","Data":"071918f47ed99d3b90a8040f5f55f4d4ecf32d60fc0490ce7ceb97b4bed10c5c"} Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.205681 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.225087 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.233202 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.233259 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.430170 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.430547 4735 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.519231 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513109f3-0c25-4e95-bab3-b80b409d96ef" path="/var/lib/kubelet/pods/513109f3-0c25-4e95-bab3-b80b409d96ef/volumes" Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.952197 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24c57a1-0cee-4143-a2cd-f7151e519ca3","Type":"ContainerStarted","Data":"48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e"} Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.952265 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24c57a1-0cee-4143-a2cd-f7151e519ca3","Type":"ContainerStarted","Data":"0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0"} Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.955377 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923bb257-f04f-4939-ad8a-10c4e6cc8aec","Type":"ContainerStarted","Data":"39f5f3de7dc4bb01348bf818e04ced2d66e94910caa5b37cfdee9f1eb74c2e8b"} Jan 28 10:04:05 crc kubenswrapper[4735]: I0128 10:04:05.970977 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.970957954 podStartE2EDuration="2.970957954s" podCreationTimestamp="2026-01-28 10:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:04:05.969909429 +0000 UTC m=+1299.119573812" watchObservedRunningTime="2026-01-28 10:04:05.970957954 +0000 UTC m=+1299.120622327" Jan 28 10:04:05 crc 
kubenswrapper[4735]: I0128 10:04:05.973167 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.136971 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bzs7q"] Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.138203 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.142696 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.142987 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.165799 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bzs7q"] Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.229609 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-config-data\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.229666 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.229709 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-scripts\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.229766 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr2tq\" (UniqueName: \"kubernetes.io/projected/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-kube-api-access-fr2tq\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.248917 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e700d210-fc95-45b8-a479-1fec0e726948" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.248978 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e700d210-fc95-45b8-a479-1fec0e726948" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.331192 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-scripts\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.331269 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr2tq\" (UniqueName: 
\"kubernetes.io/projected/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-kube-api-access-fr2tq\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.331429 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-config-data\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.331467 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.337432 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.344826 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-scripts\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.346425 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-config-data\") pod 
\"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.353161 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr2tq\" (UniqueName: \"kubernetes.io/projected/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-kube-api-access-fr2tq\") pod \"nova-cell1-cell-mapping-bzs7q\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.472365 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:06 crc kubenswrapper[4735]: I0128 10:04:06.972500 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bzs7q"] Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.371118 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.432592 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lqz84"] Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.432904 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" podUID="1a945053-f5d2-4d71-9e21-fdc3d7926265" containerName="dnsmasq-dns" containerID="cri-o://8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5" gracePeriod=10 Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.967558 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.993153 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"923bb257-f04f-4939-ad8a-10c4e6cc8aec","Type":"ContainerStarted","Data":"42e5e4a5d2d358471078cc9c57eae3236847efd4ef5dc7f60be8162377a9f99c"} Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.993265 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.996459 4735 generic.go:334] "Generic (PLEG): container finished" podID="1a945053-f5d2-4d71-9e21-fdc3d7926265" containerID="8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5" exitCode=0 Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.996535 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.996531 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" event={"ID":"1a945053-f5d2-4d71-9e21-fdc3d7926265","Type":"ContainerDied","Data":"8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5"} Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.996607 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lqz84" event={"ID":"1a945053-f5d2-4d71-9e21-fdc3d7926265","Type":"ContainerDied","Data":"0dc7f685b1923c6448fac8dcc91324dcde6bceedd4e83c83a4c1ca289c382342"} Jan 28 10:04:07 crc kubenswrapper[4735]: I0128 10:04:07.996638 4735 scope.go:117] "RemoveContainer" containerID="8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.000630 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bzs7q" 
event={"ID":"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1","Type":"ContainerStarted","Data":"2261bd57f64694e6e287a80d787d85735187d87a17b18b03b8acb5b80346e30c"} Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.000681 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bzs7q" event={"ID":"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1","Type":"ContainerStarted","Data":"c7106e2216aab8b2969b8aadefce61e46116557e09e70000953f081eeed82763"} Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.024272 4735 scope.go:117] "RemoveContainer" containerID="edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.062385 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-swift-storage-0\") pod \"1a945053-f5d2-4d71-9e21-fdc3d7926265\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.062469 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-svc\") pod \"1a945053-f5d2-4d71-9e21-fdc3d7926265\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.062522 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-sb\") pod \"1a945053-f5d2-4d71-9e21-fdc3d7926265\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.062811 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-config\") pod \"1a945053-f5d2-4d71-9e21-fdc3d7926265\" (UID: 
\"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.063099 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl7f4\" (UniqueName: \"kubernetes.io/projected/1a945053-f5d2-4d71-9e21-fdc3d7926265-kube-api-access-jl7f4\") pod \"1a945053-f5d2-4d71-9e21-fdc3d7926265\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.063189 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-nb\") pod \"1a945053-f5d2-4d71-9e21-fdc3d7926265\" (UID: \"1a945053-f5d2-4d71-9e21-fdc3d7926265\") " Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.065514 4735 scope.go:117] "RemoveContainer" containerID="8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5" Jan 28 10:04:08 crc kubenswrapper[4735]: E0128 10:04:08.066617 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5\": container with ID starting with 8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5 not found: ID does not exist" containerID="8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.066661 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5"} err="failed to get container status \"8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5\": rpc error: code = NotFound desc = could not find container \"8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5\": container with ID starting with 8c8edb4dd8fafb773393c97c22b4a0d6ebc58aa04024b271a6e016de3bdaf5b5 not found: ID does not 
exist" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.066688 4735 scope.go:117] "RemoveContainer" containerID="edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a" Jan 28 10:04:08 crc kubenswrapper[4735]: E0128 10:04:08.067387 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a\": container with ID starting with edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a not found: ID does not exist" containerID="edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.067418 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a"} err="failed to get container status \"edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a\": rpc error: code = NotFound desc = could not find container \"edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a\": container with ID starting with edd8d69334f91bdee372fba13df8b221d8cce87c0244d29bf731a7005bb9938a not found: ID does not exist" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.072568 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.865975659 podStartE2EDuration="7.072542579s" podCreationTimestamp="2026-01-28 10:04:01 +0000 UTC" firstStartedPulling="2026-01-28 10:04:02.795694485 +0000 UTC m=+1295.945358858" lastFinishedPulling="2026-01-28 10:04:07.002261405 +0000 UTC m=+1300.151925778" observedRunningTime="2026-01-28 10:04:08.03928114 +0000 UTC m=+1301.188945513" watchObservedRunningTime="2026-01-28 10:04:08.072542579 +0000 UTC m=+1301.222206942" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.077971 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1a945053-f5d2-4d71-9e21-fdc3d7926265-kube-api-access-jl7f4" (OuterVolumeSpecName: "kube-api-access-jl7f4") pod "1a945053-f5d2-4d71-9e21-fdc3d7926265" (UID: "1a945053-f5d2-4d71-9e21-fdc3d7926265"). InnerVolumeSpecName "kube-api-access-jl7f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.087963 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bzs7q" podStartSLOduration=2.08794148 podStartE2EDuration="2.08794148s" podCreationTimestamp="2026-01-28 10:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:04:08.069112768 +0000 UTC m=+1301.218777151" watchObservedRunningTime="2026-01-28 10:04:08.08794148 +0000 UTC m=+1301.237605853" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.131445 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1a945053-f5d2-4d71-9e21-fdc3d7926265" (UID: "1a945053-f5d2-4d71-9e21-fdc3d7926265"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.138100 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a945053-f5d2-4d71-9e21-fdc3d7926265" (UID: "1a945053-f5d2-4d71-9e21-fdc3d7926265"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.143431 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a945053-f5d2-4d71-9e21-fdc3d7926265" (UID: "1a945053-f5d2-4d71-9e21-fdc3d7926265"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.149998 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-config" (OuterVolumeSpecName: "config") pod "1a945053-f5d2-4d71-9e21-fdc3d7926265" (UID: "1a945053-f5d2-4d71-9e21-fdc3d7926265"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.165594 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.165641 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.165655 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.165669 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 
10:04:08.165680 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl7f4\" (UniqueName: \"kubernetes.io/projected/1a945053-f5d2-4d71-9e21-fdc3d7926265-kube-api-access-jl7f4\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.169325 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1a945053-f5d2-4d71-9e21-fdc3d7926265" (UID: "1a945053-f5d2-4d71-9e21-fdc3d7926265"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.267140 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a945053-f5d2-4d71-9e21-fdc3d7926265-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.331216 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lqz84"] Jan 28 10:04:08 crc kubenswrapper[4735]: I0128 10:04:08.343626 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lqz84"] Jan 28 10:04:09 crc kubenswrapper[4735]: I0128 10:04:09.513229 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a945053-f5d2-4d71-9e21-fdc3d7926265" path="/var/lib/kubelet/pods/1a945053-f5d2-4d71-9e21-fdc3d7926265/volumes" Jan 28 10:04:12 crc kubenswrapper[4735]: I0128 10:04:12.039532 4735 generic.go:334] "Generic (PLEG): container finished" podID="fd5a59ad-84e0-4e0c-89a8-994b538ea4e1" containerID="2261bd57f64694e6e287a80d787d85735187d87a17b18b03b8acb5b80346e30c" exitCode=0 Jan 28 10:04:12 crc kubenswrapper[4735]: I0128 10:04:12.039607 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bzs7q" 
event={"ID":"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1","Type":"ContainerDied","Data":"2261bd57f64694e6e287a80d787d85735187d87a17b18b03b8acb5b80346e30c"} Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.490795 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.691108 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-config-data\") pod \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.691205 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-combined-ca-bundle\") pod \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.691267 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-scripts\") pod \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.691338 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr2tq\" (UniqueName: \"kubernetes.io/projected/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-kube-api-access-fr2tq\") pod \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\" (UID: \"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1\") " Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.698101 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-scripts" (OuterVolumeSpecName: "scripts") pod 
"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1" (UID: "fd5a59ad-84e0-4e0c-89a8-994b538ea4e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.698706 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-kube-api-access-fr2tq" (OuterVolumeSpecName: "kube-api-access-fr2tq") pod "fd5a59ad-84e0-4e0c-89a8-994b538ea4e1" (UID: "fd5a59ad-84e0-4e0c-89a8-994b538ea4e1"). InnerVolumeSpecName "kube-api-access-fr2tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.722836 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd5a59ad-84e0-4e0c-89a8-994b538ea4e1" (UID: "fd5a59ad-84e0-4e0c-89a8-994b538ea4e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.753722 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-config-data" (OuterVolumeSpecName: "config-data") pod "fd5a59ad-84e0-4e0c-89a8-994b538ea4e1" (UID: "fd5a59ad-84e0-4e0c-89a8-994b538ea4e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.794254 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.794295 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.794311 4735 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:13 crc kubenswrapper[4735]: I0128 10:04:13.794323 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr2tq\" (UniqueName: \"kubernetes.io/projected/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1-kube-api-access-fr2tq\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.066721 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bzs7q" event={"ID":"fd5a59ad-84e0-4e0c-89a8-994b538ea4e1","Type":"ContainerDied","Data":"c7106e2216aab8b2969b8aadefce61e46116557e09e70000953f081eeed82763"} Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.066882 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7106e2216aab8b2969b8aadefce61e46116557e09e70000953f081eeed82763" Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.066942 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bzs7q" Jan 28 10:04:14 crc kubenswrapper[4735]: E0128 10:04:14.121347 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd5a59ad_84e0_4e0c_89a8_994b538ea4e1.slice/crio-c7106e2216aab8b2969b8aadefce61e46116557e09e70000953f081eeed82763\": RecentStats: unable to find data in memory cache]" Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.231271 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.232642 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerName="nova-api-api" containerID="cri-o://48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e" gracePeriod=30 Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.232579 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerName="nova-api-log" containerID="cri-o://0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0" gracePeriod=30 Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.251037 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.251342 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2273503d-bfed-4918-ab93-40b634819284" containerName="nova-scheduler-scheduler" containerID="cri-o://5a6b603f340df67992e2184f565c84c3969362d0aec46fb97a7508adce6f613c" gracePeriod=30 Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.305690 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:04:14 crc 
kubenswrapper[4735]: I0128 10:04:14.305920 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e700d210-fc95-45b8-a479-1fec0e726948" containerName="nova-metadata-log" containerID="cri-o://2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35" gracePeriod=30 Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.305962 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e700d210-fc95-45b8-a479-1fec0e726948" containerName="nova-metadata-metadata" containerID="cri-o://78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848" gracePeriod=30 Jan 28 10:04:14 crc kubenswrapper[4735]: I0128 10:04:14.969090 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.035336 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c24c57a1-0cee-4143-a2cd-f7151e519ca3-logs\") pod \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.035409 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-internal-tls-certs\") pod \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.035436 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-config-data\") pod \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.035460 4735 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-public-tls-certs\") pod \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.035503 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-combined-ca-bundle\") pod \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.035561 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9d2w\" (UniqueName: \"kubernetes.io/projected/c24c57a1-0cee-4143-a2cd-f7151e519ca3-kube-api-access-n9d2w\") pod \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\" (UID: \"c24c57a1-0cee-4143-a2cd-f7151e519ca3\") " Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.039406 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c24c57a1-0cee-4143-a2cd-f7151e519ca3-logs" (OuterVolumeSpecName: "logs") pod "c24c57a1-0cee-4143-a2cd-f7151e519ca3" (UID: "c24c57a1-0cee-4143-a2cd-f7151e519ca3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.078811 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-config-data" (OuterVolumeSpecName: "config-data") pod "c24c57a1-0cee-4143-a2cd-f7151e519ca3" (UID: "c24c57a1-0cee-4143-a2cd-f7151e519ca3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.080440 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c24c57a1-0cee-4143-a2cd-f7151e519ca3-kube-api-access-n9d2w" (OuterVolumeSpecName: "kube-api-access-n9d2w") pod "c24c57a1-0cee-4143-a2cd-f7151e519ca3" (UID: "c24c57a1-0cee-4143-a2cd-f7151e519ca3"). InnerVolumeSpecName "kube-api-access-n9d2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.105990 4735 generic.go:334] "Generic (PLEG): container finished" podID="e700d210-fc95-45b8-a479-1fec0e726948" containerID="2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35" exitCode=143 Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.106723 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e700d210-fc95-45b8-a479-1fec0e726948","Type":"ContainerDied","Data":"2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35"} Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.108046 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c24c57a1-0cee-4143-a2cd-f7151e519ca3" (UID: "c24c57a1-0cee-4143-a2cd-f7151e519ca3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.108397 4735 generic.go:334] "Generic (PLEG): container finished" podID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerID="48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e" exitCode=0 Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.108421 4735 generic.go:334] "Generic (PLEG): container finished" podID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerID="0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0" exitCode=143 Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.108458 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24c57a1-0cee-4143-a2cd-f7151e519ca3","Type":"ContainerDied","Data":"48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e"} Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.108474 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24c57a1-0cee-4143-a2cd-f7151e519ca3","Type":"ContainerDied","Data":"0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0"} Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.108483 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c24c57a1-0cee-4143-a2cd-f7151e519ca3","Type":"ContainerDied","Data":"8f09a212414f339d01afcb88bc987daf3c7e1160a7142c7f890246d9c79fbded"} Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.108498 4735 scope.go:117] "RemoveContainer" containerID="48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.108510 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.110763 4735 generic.go:334] "Generic (PLEG): container finished" podID="2273503d-bfed-4918-ab93-40b634819284" containerID="5a6b603f340df67992e2184f565c84c3969362d0aec46fb97a7508adce6f613c" exitCode=0 Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.110802 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2273503d-bfed-4918-ab93-40b634819284","Type":"ContainerDied","Data":"5a6b603f340df67992e2184f565c84c3969362d0aec46fb97a7508adce6f613c"} Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.116461 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c24c57a1-0cee-4143-a2cd-f7151e519ca3" (UID: "c24c57a1-0cee-4143-a2cd-f7151e519ca3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.138325 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c24c57a1-0cee-4143-a2cd-f7151e519ca3-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.138392 4735 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.138402 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.138475 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.138487 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9d2w\" (UniqueName: \"kubernetes.io/projected/c24c57a1-0cee-4143-a2cd-f7151e519ca3-kube-api-access-n9d2w\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.149961 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c24c57a1-0cee-4143-a2cd-f7151e519ca3" (UID: "c24c57a1-0cee-4143-a2cd-f7151e519ca3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.154071 4735 scope.go:117] "RemoveContainer" containerID="0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.174331 4735 scope.go:117] "RemoveContainer" containerID="48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e" Jan 28 10:04:15 crc kubenswrapper[4735]: E0128 10:04:15.175232 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e\": container with ID starting with 48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e not found: ID does not exist" containerID="48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.175289 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e"} err="failed to get container status \"48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e\": rpc error: code = NotFound desc = could not find container \"48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e\": container with ID starting with 48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e not found: ID does not exist" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.175322 4735 scope.go:117] "RemoveContainer" containerID="0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0" Jan 28 10:04:15 crc kubenswrapper[4735]: E0128 10:04:15.175655 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0\": container with ID starting with 
0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0 not found: ID does not exist" containerID="0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.175685 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0"} err="failed to get container status \"0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0\": rpc error: code = NotFound desc = could not find container \"0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0\": container with ID starting with 0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0 not found: ID does not exist" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.175707 4735 scope.go:117] "RemoveContainer" containerID="48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.176431 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e"} err="failed to get container status \"48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e\": rpc error: code = NotFound desc = could not find container \"48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e\": container with ID starting with 48e0eb013d3baada93f9c9f6ad37cb6dde7629291773c8f42680083bcaae810e not found: ID does not exist" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.176466 4735 scope.go:117] "RemoveContainer" containerID="0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.177527 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0"} err="failed to get container status 
\"0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0\": rpc error: code = NotFound desc = could not find container \"0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0\": container with ID starting with 0ef5763caabbc69af30d28c0e58872b0b787ad1be0206053305cc15eaa51dec0 not found: ID does not exist" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.188509 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.239758 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-combined-ca-bundle\") pod \"2273503d-bfed-4918-ab93-40b634819284\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.239854 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldll5\" (UniqueName: \"kubernetes.io/projected/2273503d-bfed-4918-ab93-40b634819284-kube-api-access-ldll5\") pod \"2273503d-bfed-4918-ab93-40b634819284\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.239902 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-config-data\") pod \"2273503d-bfed-4918-ab93-40b634819284\" (UID: \"2273503d-bfed-4918-ab93-40b634819284\") " Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.240231 4735 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c24c57a1-0cee-4143-a2cd-f7151e519ca3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.243606 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2273503d-bfed-4918-ab93-40b634819284-kube-api-access-ldll5" (OuterVolumeSpecName: "kube-api-access-ldll5") pod "2273503d-bfed-4918-ab93-40b634819284" (UID: "2273503d-bfed-4918-ab93-40b634819284"). InnerVolumeSpecName "kube-api-access-ldll5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.272772 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-config-data" (OuterVolumeSpecName: "config-data") pod "2273503d-bfed-4918-ab93-40b634819284" (UID: "2273503d-bfed-4918-ab93-40b634819284"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.283917 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2273503d-bfed-4918-ab93-40b634819284" (UID: "2273503d-bfed-4918-ab93-40b634819284"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.342019 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.342058 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2273503d-bfed-4918-ab93-40b634819284-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.342077 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldll5\" (UniqueName: \"kubernetes.io/projected/2273503d-bfed-4918-ab93-40b634819284-kube-api-access-ldll5\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.457621 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.476177 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.488162 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:15 crc kubenswrapper[4735]: E0128 10:04:15.491175 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a945053-f5d2-4d71-9e21-fdc3d7926265" containerName="dnsmasq-dns" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.491366 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a945053-f5d2-4d71-9e21-fdc3d7926265" containerName="dnsmasq-dns" Jan 28 10:04:15 crc kubenswrapper[4735]: E0128 10:04:15.491451 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5a59ad-84e0-4e0c-89a8-994b538ea4e1" containerName="nova-manage" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.491535 4735 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fd5a59ad-84e0-4e0c-89a8-994b538ea4e1" containerName="nova-manage" Jan 28 10:04:15 crc kubenswrapper[4735]: E0128 10:04:15.491640 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a945053-f5d2-4d71-9e21-fdc3d7926265" containerName="init" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.491714 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a945053-f5d2-4d71-9e21-fdc3d7926265" containerName="init" Jan 28 10:04:15 crc kubenswrapper[4735]: E0128 10:04:15.491914 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerName="nova-api-api" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.491980 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerName="nova-api-api" Jan 28 10:04:15 crc kubenswrapper[4735]: E0128 10:04:15.492059 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2273503d-bfed-4918-ab93-40b634819284" containerName="nova-scheduler-scheduler" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.492268 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2273503d-bfed-4918-ab93-40b634819284" containerName="nova-scheduler-scheduler" Jan 28 10:04:15 crc kubenswrapper[4735]: E0128 10:04:15.492352 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerName="nova-api-log" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.492411 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerName="nova-api-log" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.492714 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2273503d-bfed-4918-ab93-40b634819284" containerName="nova-scheduler-scheduler" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.492840 4735 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fd5a59ad-84e0-4e0c-89a8-994b538ea4e1" containerName="nova-manage" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.492939 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a945053-f5d2-4d71-9e21-fdc3d7926265" containerName="dnsmasq-dns" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.493116 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerName="nova-api-log" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.493196 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" containerName="nova-api-api" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.494436 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.496789 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.498329 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.499146 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.506999 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.521260 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c24c57a1-0cee-4143-a2cd-f7151e519ca3" path="/var/lib/kubelet/pods/c24c57a1-0cee-4143-a2cd-f7151e519ca3/volumes" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.545256 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.545339 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckwr\" (UniqueName: \"kubernetes.io/projected/6c6bcae3-d25c-4076-91f5-cb8979303b25-kube-api-access-gckwr\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.545435 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.545462 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-config-data\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.545557 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6bcae3-d25c-4076-91f5-cb8979303b25-logs\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.545636 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.648068 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.648114 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-config-data\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.648183 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6bcae3-d25c-4076-91f5-cb8979303b25-logs\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.648229 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.648289 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.648324 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gckwr\" 
(UniqueName: \"kubernetes.io/projected/6c6bcae3-d25c-4076-91f5-cb8979303b25-kube-api-access-gckwr\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.649084 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6bcae3-d25c-4076-91f5-cb8979303b25-logs\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.653596 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.653992 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.659587 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-config-data\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.660632 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6bcae3-d25c-4076-91f5-cb8979303b25-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.666821 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gckwr\" (UniqueName: \"kubernetes.io/projected/6c6bcae3-d25c-4076-91f5-cb8979303b25-kube-api-access-gckwr\") pod \"nova-api-0\" (UID: \"6c6bcae3-d25c-4076-91f5-cb8979303b25\") " pod="openstack/nova-api-0" Jan 28 10:04:15 crc kubenswrapper[4735]: I0128 10:04:15.846265 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.121354 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2273503d-bfed-4918-ab93-40b634819284","Type":"ContainerDied","Data":"23fc75ecf83a7064339886edd4f3799cf674dd90fe6e58401e2ccbe4f3e14ae5"} Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.121797 4735 scope.go:117] "RemoveContainer" containerID="5a6b603f340df67992e2184f565c84c3969362d0aec46fb97a7508adce6f613c" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.121593 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.156962 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.179207 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.197099 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.198696 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.200516 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.205687 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.261165 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3554759-653b-4a68-99e7-7f6cddcd5e5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f3554759-653b-4a68-99e7-7f6cddcd5e5a\") " pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.261260 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhb4d\" (UniqueName: \"kubernetes.io/projected/f3554759-653b-4a68-99e7-7f6cddcd5e5a-kube-api-access-hhb4d\") pod \"nova-scheduler-0\" (UID: \"f3554759-653b-4a68-99e7-7f6cddcd5e5a\") " pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.261334 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3554759-653b-4a68-99e7-7f6cddcd5e5a-config-data\") pod \"nova-scheduler-0\" (UID: \"f3554759-653b-4a68-99e7-7f6cddcd5e5a\") " pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.345421 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 10:04:16 crc kubenswrapper[4735]: W0128 10:04:16.346837 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6bcae3_d25c_4076_91f5_cb8979303b25.slice/crio-1a157d75f83896b7490c42b4b513090ab5302db0edad82b7ff4e32412803090e 
WatchSource:0}: Error finding container 1a157d75f83896b7490c42b4b513090ab5302db0edad82b7ff4e32412803090e: Status 404 returned error can't find the container with id 1a157d75f83896b7490c42b4b513090ab5302db0edad82b7ff4e32412803090e Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.362113 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhb4d\" (UniqueName: \"kubernetes.io/projected/f3554759-653b-4a68-99e7-7f6cddcd5e5a-kube-api-access-hhb4d\") pod \"nova-scheduler-0\" (UID: \"f3554759-653b-4a68-99e7-7f6cddcd5e5a\") " pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.362229 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3554759-653b-4a68-99e7-7f6cddcd5e5a-config-data\") pod \"nova-scheduler-0\" (UID: \"f3554759-653b-4a68-99e7-7f6cddcd5e5a\") " pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.362311 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3554759-653b-4a68-99e7-7f6cddcd5e5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f3554759-653b-4a68-99e7-7f6cddcd5e5a\") " pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.367100 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3554759-653b-4a68-99e7-7f6cddcd5e5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f3554759-653b-4a68-99e7-7f6cddcd5e5a\") " pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.367415 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3554759-653b-4a68-99e7-7f6cddcd5e5a-config-data\") pod \"nova-scheduler-0\" (UID: 
\"f3554759-653b-4a68-99e7-7f6cddcd5e5a\") " pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.377985 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhb4d\" (UniqueName: \"kubernetes.io/projected/f3554759-653b-4a68-99e7-7f6cddcd5e5a-kube-api-access-hhb4d\") pod \"nova-scheduler-0\" (UID: \"f3554759-653b-4a68-99e7-7f6cddcd5e5a\") " pod="openstack/nova-scheduler-0" Jan 28 10:04:16 crc kubenswrapper[4735]: I0128 10:04:16.519283 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 10:04:17 crc kubenswrapper[4735]: I0128 10:04:17.135740 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c6bcae3-d25c-4076-91f5-cb8979303b25","Type":"ContainerStarted","Data":"5bfe3f75659b8b3db58f2499e237cbd8ab2a48da242667aa81a90a424d1d5951"} Jan 28 10:04:17 crc kubenswrapper[4735]: I0128 10:04:17.136381 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c6bcae3-d25c-4076-91f5-cb8979303b25","Type":"ContainerStarted","Data":"8e94f546e39da664dc804f4c43745bba405445df826b12c0257e16f8903c8567"} Jan 28 10:04:17 crc kubenswrapper[4735]: I0128 10:04:17.136397 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c6bcae3-d25c-4076-91f5-cb8979303b25","Type":"ContainerStarted","Data":"1a157d75f83896b7490c42b4b513090ab5302db0edad82b7ff4e32412803090e"} Jan 28 10:04:17 crc kubenswrapper[4735]: I0128 10:04:17.163987 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.163961877 podStartE2EDuration="2.163961877s" podCreationTimestamp="2026-01-28 10:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:04:17.159702227 +0000 UTC m=+1310.309366680" 
watchObservedRunningTime="2026-01-28 10:04:17.163961877 +0000 UTC m=+1310.313626250" Jan 28 10:04:17 crc kubenswrapper[4735]: I0128 10:04:17.532702 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2273503d-bfed-4918-ab93-40b634819284" path="/var/lib/kubelet/pods/2273503d-bfed-4918-ab93-40b634819284/volumes" Jan 28 10:04:17 crc kubenswrapper[4735]: I0128 10:04:17.537141 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 10:04:17 crc kubenswrapper[4735]: I0128 10:04:17.903023 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.094385 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-combined-ca-bundle\") pod \"e700d210-fc95-45b8-a479-1fec0e726948\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.094689 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e700d210-fc95-45b8-a479-1fec0e726948-logs\") pod \"e700d210-fc95-45b8-a479-1fec0e726948\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.095065 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7mcc\" (UniqueName: \"kubernetes.io/projected/e700d210-fc95-45b8-a479-1fec0e726948-kube-api-access-p7mcc\") pod \"e700d210-fc95-45b8-a479-1fec0e726948\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.095283 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-nova-metadata-tls-certs\") 
pod \"e700d210-fc95-45b8-a479-1fec0e726948\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.095565 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-config-data\") pod \"e700d210-fc95-45b8-a479-1fec0e726948\" (UID: \"e700d210-fc95-45b8-a479-1fec0e726948\") " Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.095585 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e700d210-fc95-45b8-a479-1fec0e726948-logs" (OuterVolumeSpecName: "logs") pod "e700d210-fc95-45b8-a479-1fec0e726948" (UID: "e700d210-fc95-45b8-a479-1fec0e726948"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.096633 4735 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e700d210-fc95-45b8-a479-1fec0e726948-logs\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.099903 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e700d210-fc95-45b8-a479-1fec0e726948-kube-api-access-p7mcc" (OuterVolumeSpecName: "kube-api-access-p7mcc") pod "e700d210-fc95-45b8-a479-1fec0e726948" (UID: "e700d210-fc95-45b8-a479-1fec0e726948"). InnerVolumeSpecName "kube-api-access-p7mcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.120282 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-config-data" (OuterVolumeSpecName: "config-data") pod "e700d210-fc95-45b8-a479-1fec0e726948" (UID: "e700d210-fc95-45b8-a479-1fec0e726948"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.122835 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e700d210-fc95-45b8-a479-1fec0e726948" (UID: "e700d210-fc95-45b8-a479-1fec0e726948"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.144333 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e700d210-fc95-45b8-a479-1fec0e726948" (UID: "e700d210-fc95-45b8-a479-1fec0e726948"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.150880 4735 generic.go:334] "Generic (PLEG): container finished" podID="e700d210-fc95-45b8-a479-1fec0e726948" containerID="78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848" exitCode=0 Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.150944 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e700d210-fc95-45b8-a479-1fec0e726948","Type":"ContainerDied","Data":"78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848"} Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.151030 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e700d210-fc95-45b8-a479-1fec0e726948","Type":"ContainerDied","Data":"ecbeb8541c9b245111a5a48565df64b1279cd7d39e80d515be6f1bba3404048b"} Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.151097 4735 scope.go:117] "RemoveContainer" containerID="78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848" 
Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.151376 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.155297 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f3554759-653b-4a68-99e7-7f6cddcd5e5a","Type":"ContainerStarted","Data":"ccc56dbe1d8e3d81145b248681d19419a67a5f6cdecb03c1299e0ca1d0b8290b"} Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.155348 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f3554759-653b-4a68-99e7-7f6cddcd5e5a","Type":"ContainerStarted","Data":"7b028975bf5029384729743629b1329aab7a3b89c16cb8e13dd57ee8a8014969"} Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.194108 4735 scope.go:117] "RemoveContainer" containerID="2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.206084 4735 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.206523 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.206545 4735 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e700d210-fc95-45b8-a479-1fec0e726948-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.206562 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7mcc\" (UniqueName: 
\"kubernetes.io/projected/e700d210-fc95-45b8-a479-1fec0e726948-kube-api-access-p7mcc\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.213086 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.213051164 podStartE2EDuration="2.213051164s" podCreationTimestamp="2026-01-28 10:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:04:18.17655518 +0000 UTC m=+1311.326219583" watchObservedRunningTime="2026-01-28 10:04:18.213051164 +0000 UTC m=+1311.362715547" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.224263 4735 scope.go:117] "RemoveContainer" containerID="78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848" Jan 28 10:04:18 crc kubenswrapper[4735]: E0128 10:04:18.230414 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848\": container with ID starting with 78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848 not found: ID does not exist" containerID="78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.230468 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848"} err="failed to get container status \"78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848\": rpc error: code = NotFound desc = could not find container \"78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848\": container with ID starting with 78946d13963c4b196c375feb4110dbb4724dd5ab1b52f4830d20ef7adc289848 not found: ID does not exist" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.230498 4735 scope.go:117] 
"RemoveContainer" containerID="2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35" Jan 28 10:04:18 crc kubenswrapper[4735]: E0128 10:04:18.231019 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35\": container with ID starting with 2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35 not found: ID does not exist" containerID="2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.231072 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35"} err="failed to get container status \"2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35\": rpc error: code = NotFound desc = could not find container \"2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35\": container with ID starting with 2908ef861c3bd4deb2c813735dbade7407cee383782ecb605cd8569523758a35 not found: ID does not exist" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.231441 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.250853 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.263766 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:04:18 crc kubenswrapper[4735]: E0128 10:04:18.264304 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e700d210-fc95-45b8-a479-1fec0e726948" containerName="nova-metadata-metadata" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.264332 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e700d210-fc95-45b8-a479-1fec0e726948" 
containerName="nova-metadata-metadata" Jan 28 10:04:18 crc kubenswrapper[4735]: E0128 10:04:18.264346 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e700d210-fc95-45b8-a479-1fec0e726948" containerName="nova-metadata-log" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.264355 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e700d210-fc95-45b8-a479-1fec0e726948" containerName="nova-metadata-log" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.264623 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e700d210-fc95-45b8-a479-1fec0e726948" containerName="nova-metadata-log" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.264659 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e700d210-fc95-45b8-a479-1fec0e726948" containerName="nova-metadata-metadata" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.265929 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.269186 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.269273 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.276011 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.311294 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccfp\" (UniqueName: \"kubernetes.io/projected/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-kube-api-access-nccfp\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.311536 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-config-data\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.311574 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-logs\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.311717 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.311792 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.413113 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nccfp\" (UniqueName: \"kubernetes.io/projected/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-kube-api-access-nccfp\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.413211 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-config-data\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.413232 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-logs\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.413280 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.413302 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.413929 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-logs\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.417800 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 
10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.417942 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.418876 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-config-data\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.447119 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nccfp\" (UniqueName: \"kubernetes.io/projected/2f9188ec-6ee4-4120-8c9e-ab22d7a57546-kube-api-access-nccfp\") pod \"nova-metadata-0\" (UID: \"2f9188ec-6ee4-4120-8c9e-ab22d7a57546\") " pod="openstack/nova-metadata-0" Jan 28 10:04:18 crc kubenswrapper[4735]: I0128 10:04:18.610506 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 10:04:19 crc kubenswrapper[4735]: W0128 10:04:19.074541 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9188ec_6ee4_4120_8c9e_ab22d7a57546.slice/crio-235df6627819a0853978108aa5be85f5e3035a662c68d33d4e98578d0fbd1e42 WatchSource:0}: Error finding container 235df6627819a0853978108aa5be85f5e3035a662c68d33d4e98578d0fbd1e42: Status 404 returned error can't find the container with id 235df6627819a0853978108aa5be85f5e3035a662c68d33d4e98578d0fbd1e42 Jan 28 10:04:19 crc kubenswrapper[4735]: I0128 10:04:19.080803 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 10:04:19 crc kubenswrapper[4735]: I0128 10:04:19.169090 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9188ec-6ee4-4120-8c9e-ab22d7a57546","Type":"ContainerStarted","Data":"235df6627819a0853978108aa5be85f5e3035a662c68d33d4e98578d0fbd1e42"} Jan 28 10:04:19 crc kubenswrapper[4735]: I0128 10:04:19.516141 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e700d210-fc95-45b8-a479-1fec0e726948" path="/var/lib/kubelet/pods/e700d210-fc95-45b8-a479-1fec0e726948/volumes" Jan 28 10:04:20 crc kubenswrapper[4735]: I0128 10:04:20.183204 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9188ec-6ee4-4120-8c9e-ab22d7a57546","Type":"ContainerStarted","Data":"a3a815d059a49eff00a6d19689fdec77864b8ece2f3bd5c555a4bd78c5088aed"} Jan 28 10:04:20 crc kubenswrapper[4735]: I0128 10:04:20.183475 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f9188ec-6ee4-4120-8c9e-ab22d7a57546","Type":"ContainerStarted","Data":"6c0ef91ef1bacd8ae8873bcaf1da10564d39234998cc2300b97a84e265cdb359"} Jan 28 10:04:20 crc kubenswrapper[4735]: I0128 10:04:20.211877 4735 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.211857194 podStartE2EDuration="2.211857194s" podCreationTimestamp="2026-01-28 10:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:04:20.206042348 +0000 UTC m=+1313.355706721" watchObservedRunningTime="2026-01-28 10:04:20.211857194 +0000 UTC m=+1313.361521567" Jan 28 10:04:21 crc kubenswrapper[4735]: I0128 10:04:21.520653 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 10:04:23 crc kubenswrapper[4735]: I0128 10:04:23.611631 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 10:04:23 crc kubenswrapper[4735]: I0128 10:04:23.612017 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 10:04:25 crc kubenswrapper[4735]: I0128 10:04:25.847105 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 10:04:25 crc kubenswrapper[4735]: I0128 10:04:25.847509 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 10:04:26 crc kubenswrapper[4735]: I0128 10:04:26.520549 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 10:04:26 crc kubenswrapper[4735]: I0128 10:04:26.553987 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 10:04:26 crc kubenswrapper[4735]: I0128 10:04:26.861050 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c6bcae3-d25c-4076-91f5-cb8979303b25" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Jan 28 10:04:26 crc kubenswrapper[4735]: I0128 10:04:26.861045 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c6bcae3-d25c-4076-91f5-cb8979303b25" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 10:04:27 crc kubenswrapper[4735]: I0128 10:04:27.282601 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 10:04:28 crc kubenswrapper[4735]: I0128 10:04:28.612460 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 10:04:28 crc kubenswrapper[4735]: I0128 10:04:28.613455 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 10:04:29 crc kubenswrapper[4735]: I0128 10:04:29.626952 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2f9188ec-6ee4-4120-8c9e-ab22d7a57546" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 10:04:29 crc kubenswrapper[4735]: I0128 10:04:29.626990 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2f9188ec-6ee4-4120-8c9e-ab22d7a57546" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 10:04:32 crc kubenswrapper[4735]: I0128 10:04:32.300793 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 10:04:35 crc kubenswrapper[4735]: I0128 10:04:35.429491 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:04:35 crc kubenswrapper[4735]: I0128 10:04:35.430109 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:04:35 crc kubenswrapper[4735]: I0128 10:04:35.854328 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 10:04:35 crc kubenswrapper[4735]: I0128 10:04:35.854863 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 10:04:35 crc kubenswrapper[4735]: I0128 10:04:35.855075 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 10:04:35 crc kubenswrapper[4735]: I0128 10:04:35.860563 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 10:04:36 crc kubenswrapper[4735]: I0128 10:04:36.334709 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 10:04:36 crc kubenswrapper[4735]: I0128 10:04:36.342742 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 10:04:38 crc kubenswrapper[4735]: I0128 10:04:38.618027 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 10:04:38 crc kubenswrapper[4735]: I0128 10:04:38.622647 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 10:04:38 crc kubenswrapper[4735]: I0128 10:04:38.624935 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 10:04:39 crc kubenswrapper[4735]: I0128 10:04:39.372902 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 10:04:47 crc kubenswrapper[4735]: I0128 10:04:47.575000 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 10:04:49 crc kubenswrapper[4735]: I0128 10:04:49.205567 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 10:04:51 crc kubenswrapper[4735]: I0128 10:04:51.540123 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="348630cf-4549-4b7a-8784-9b1640f64c39" containerName="rabbitmq" containerID="cri-o://19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea" gracePeriod=604797 Jan 28 10:04:52 crc kubenswrapper[4735]: I0128 10:04:52.936707 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" containerName="rabbitmq" containerID="cri-o://dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910" gracePeriod=604797 Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.114426 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.243653 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-confd\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.243698 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-config-data\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.243723 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-server-conf\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.243832 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4jxk\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-kube-api-access-x4jxk\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.243864 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/348630cf-4549-4b7a-8784-9b1640f64c39-pod-info\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.243982 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/348630cf-4549-4b7a-8784-9b1640f64c39-erlang-cookie-secret\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.244016 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-plugins-conf\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.244062 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.244156 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-tls\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.244203 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-erlang-cookie\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.244235 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-plugins\") pod \"348630cf-4549-4b7a-8784-9b1640f64c39\" (UID: \"348630cf-4549-4b7a-8784-9b1640f64c39\") " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 
10:04:58.244598 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.244781 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.244944 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.245176 4735 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.245191 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.245201 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.248887 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.252030 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/348630cf-4549-4b7a-8784-9b1640f64c39-pod-info" (OuterVolumeSpecName: "pod-info") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.253135 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.254221 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-kube-api-access-x4jxk" (OuterVolumeSpecName: "kube-api-access-x4jxk") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "kube-api-access-x4jxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.263955 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348630cf-4549-4b7a-8784-9b1640f64c39-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.296042 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-config-data" (OuterVolumeSpecName: "config-data") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.304065 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-server-conf" (OuterVolumeSpecName: "server-conf") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.347056 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.347094 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.347102 4735 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/348630cf-4549-4b7a-8784-9b1640f64c39-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.347111 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4jxk\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-kube-api-access-x4jxk\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.347122 4735 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/348630cf-4549-4b7a-8784-9b1640f64c39-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.347130 4735 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/348630cf-4549-4b7a-8784-9b1640f64c39-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.347157 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.370140 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.377611 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "348630cf-4549-4b7a-8784-9b1640f64c39" (UID: "348630cf-4549-4b7a-8784-9b1640f64c39"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.448397 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/348630cf-4549-4b7a-8784-9b1640f64c39-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.448440 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.565198 4735 generic.go:334] "Generic (PLEG): container finished" podID="348630cf-4549-4b7a-8784-9b1640f64c39" containerID="19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea" exitCode=0 Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.565250 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.565250 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"348630cf-4549-4b7a-8784-9b1640f64c39","Type":"ContainerDied","Data":"19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea"} Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.565396 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"348630cf-4549-4b7a-8784-9b1640f64c39","Type":"ContainerDied","Data":"3839075998c0b24ff149998d883d9d9d471742b2ebaa79cb0e0f90e76ed3391c"} Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.565441 4735 scope.go:117] "RemoveContainer" containerID="19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.601436 4735 scope.go:117] "RemoveContainer" containerID="074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.611088 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.620138 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.654010 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 10:04:58 crc kubenswrapper[4735]: E0128 10:04:58.655174 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348630cf-4549-4b7a-8784-9b1640f64c39" containerName="setup-container" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.655203 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="348630cf-4549-4b7a-8784-9b1640f64c39" containerName="setup-container" Jan 28 10:04:58 crc kubenswrapper[4735]: E0128 10:04:58.655231 4735 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="348630cf-4549-4b7a-8784-9b1640f64c39" containerName="rabbitmq" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.655239 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="348630cf-4549-4b7a-8784-9b1640f64c39" containerName="rabbitmq" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.655601 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="348630cf-4549-4b7a-8784-9b1640f64c39" containerName="rabbitmq" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.656931 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.658981 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.659003 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.659027 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.659042 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.659154 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.659160 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zfw4x" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.660808 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.661015 4735 scope.go:117] "RemoveContainer" containerID="19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea" Jan 28 10:04:58 crc 
kubenswrapper[4735]: E0128 10:04:58.664590 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea\": container with ID starting with 19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea not found: ID does not exist" containerID="19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.664650 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea"} err="failed to get container status \"19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea\": rpc error: code = NotFound desc = could not find container \"19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea\": container with ID starting with 19c87e42426e4aab7dc47a874d6b63968d18d5fa84f1d4ee9e799491bfb458ea not found: ID does not exist" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.664685 4735 scope.go:117] "RemoveContainer" containerID="074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898" Jan 28 10:04:58 crc kubenswrapper[4735]: E0128 10:04:58.665182 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898\": container with ID starting with 074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898 not found: ID does not exist" containerID="074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.665229 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898"} err="failed to get container status 
\"074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898\": rpc error: code = NotFound desc = could not find container \"074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898\": container with ID starting with 074772d4c557adbf7d1aa70279d0557705aeb23f4edcefca4e26366446086898 not found: ID does not exist" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.690005 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.738156 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-vcd9g"] Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.739575 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.741960 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.767500 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-vcd9g"] Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855031 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855099 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 
10:04:58.855140 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jcq9\" (UniqueName: \"kubernetes.io/projected/bbefc8b9-3deb-4ae1-a798-1e95f1014380-kube-api-access-4jcq9\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855170 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bbefc8b9-3deb-4ae1-a798-1e95f1014380-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855199 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855229 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855326 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bbefc8b9-3deb-4ae1-a798-1e95f1014380-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855350 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855374 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bbefc8b9-3deb-4ae1-a798-1e95f1014380-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbefc8b9-3deb-4ae1-a798-1e95f1014380-config-data\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855408 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855502 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855559 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bbefc8b9-3deb-4ae1-a798-1e95f1014380-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855583 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855613 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-config\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855640 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-svc\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855795 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.855849 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-srjv2\" (UniqueName: \"kubernetes.io/projected/fab7048f-5f5e-4599-b853-ab66e03cae89-kube-api-access-srjv2\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966154 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbefc8b9-3deb-4ae1-a798-1e95f1014380-config-data\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966199 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966248 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966303 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bbefc8b9-3deb-4ae1-a798-1e95f1014380-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966341 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966376 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-config\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966411 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-svc\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966488 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966538 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srjv2\" (UniqueName: \"kubernetes.io/projected/fab7048f-5f5e-4599-b853-ab66e03cae89-kube-api-access-srjv2\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966629 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966695 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.966948 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jcq9\" (UniqueName: \"kubernetes.io/projected/bbefc8b9-3deb-4ae1-a798-1e95f1014380-kube-api-access-4jcq9\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.967005 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bbefc8b9-3deb-4ae1-a798-1e95f1014380-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.967043 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.967080 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.967172 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bbefc8b9-3deb-4ae1-a798-1e95f1014380-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.967197 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.967228 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bbefc8b9-3deb-4ae1-a798-1e95f1014380-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.968466 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.968500 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bbefc8b9-3deb-4ae1-a798-1e95f1014380-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.969421 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-config\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.969488 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.969733 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbefc8b9-3deb-4ae1-a798-1e95f1014380-config-data\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.970334 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-svc\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.970995 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.971593 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/bbefc8b9-3deb-4ae1-a798-1e95f1014380-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.972001 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.981046 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.984366 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.984476 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.985507 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bbefc8b9-3deb-4ae1-a798-1e95f1014380-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") 
" pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.985666 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bbefc8b9-3deb-4ae1-a798-1e95f1014380-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.987678 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:58 crc kubenswrapper[4735]: I0128 10:04:58.998328 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bbefc8b9-3deb-4ae1-a798-1e95f1014380-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.000238 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srjv2\" (UniqueName: \"kubernetes.io/projected/fab7048f-5f5e-4599-b853-ab66e03cae89-kube-api-access-srjv2\") pod \"dnsmasq-dns-67b789f86c-vcd9g\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.006778 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.023525 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4jcq9\" (UniqueName: \"kubernetes.io/projected/bbefc8b9-3deb-4ae1-a798-1e95f1014380-kube-api-access-4jcq9\") pod \"rabbitmq-server-0\" (UID: \"bbefc8b9-3deb-4ae1-a798-1e95f1014380\") " pod="openstack/rabbitmq-server-0" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.112822 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.293191 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.506395 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.527845 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348630cf-4549-4b7a-8784-9b1640f64c39" path="/var/lib/kubelet/pods/348630cf-4549-4b7a-8784-9b1640f64c39/volumes" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.589948 4735 generic.go:334] "Generic (PLEG): container finished" podID="bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" containerID="dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910" exitCode=0 Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.589999 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1","Type":"ContainerDied","Data":"dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910"} Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.590029 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1","Type":"ContainerDied","Data":"e9f7a0b45360c6f1a0e991bda107429d76aa13104521ed59d2905afb685b91f1"} Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.590050 4735 scope.go:117] "RemoveContainer" 
containerID="dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.590208 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.607921 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-vcd9g"] Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.619975 4735 scope.go:117] "RemoveContainer" containerID="eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.645384 4735 scope.go:117] "RemoveContainer" containerID="dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910" Jan 28 10:04:59 crc kubenswrapper[4735]: E0128 10:04:59.645908 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910\": container with ID starting with dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910 not found: ID does not exist" containerID="dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.645939 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910"} err="failed to get container status \"dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910\": rpc error: code = NotFound desc = could not find container \"dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910\": container with ID starting with dc8885aabf041d87ed330c564d9ab0c0cb91f584fb1e34cfc9dd1bdf9720a910 not found: ID does not exist" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.645960 4735 scope.go:117] "RemoveContainer" 
containerID="eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90" Jan 28 10:04:59 crc kubenswrapper[4735]: E0128 10:04:59.646290 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90\": container with ID starting with eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90 not found: ID does not exist" containerID="eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.646339 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90"} err="failed to get container status \"eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90\": rpc error: code = NotFound desc = could not find container \"eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90\": container with ID starting with eeb38ab05853e87edab775732f8845a1e0767277fe834036f74ece1c3d698a90 not found: ID does not exist" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.679896 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-plugins\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.679950 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-config-data\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.679998 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.680024 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-plugins-conf\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.680079 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-pod-info\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.680111 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-erlang-cookie-secret\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.680134 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-tls\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.680161 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-erlang-cookie\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: 
I0128 10:04:59.680197 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-confd\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.680288 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbzvf\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-kube-api-access-lbzvf\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.680343 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-server-conf\") pod \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\" (UID: \"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1\") " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.682113 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.682124 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.682697 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.686192 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.687122 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-kube-api-access-lbzvf" (OuterVolumeSpecName: "kube-api-access-lbzvf") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "kube-api-access-lbzvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.687865 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.687905 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-pod-info" (OuterVolumeSpecName: "pod-info") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.689291 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.717613 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-config-data" (OuterVolumeSpecName: "config-data") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.777731 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-server-conf" (OuterVolumeSpecName: "server-conf") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782334 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782369 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782416 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782430 4735 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782441 4735 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782452 4735 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782464 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782476 4735 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782490 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbzvf\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-kube-api-access-lbzvf\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.782500 4735 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.813894 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.816218 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 10:04:59 crc kubenswrapper[4735]: W0128 10:04:59.820594 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbefc8b9_3deb_4ae1_a798_1e95f1014380.slice/crio-65f14a21859e6447aae7e1c54ed6d7c9d6dd4aa29dc64d9dc2f2491c6ed29dbc WatchSource:0}: Error finding container 65f14a21859e6447aae7e1c54ed6d7c9d6dd4aa29dc64d9dc2f2491c6ed29dbc: Status 404 returned error can't find the container with id 65f14a21859e6447aae7e1c54ed6d7c9d6dd4aa29dc64d9dc2f2491c6ed29dbc Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.830179 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" (UID: 
"bb11b48d-f557-4ee4-aa1a-c23e68f65fd1"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.884247 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.884292 4735 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.974606 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 10:04:59 crc kubenswrapper[4735]: I0128 10:04:59.994409 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.025813 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 10:05:00 crc kubenswrapper[4735]: E0128 10:05:00.026192 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" containerName="rabbitmq" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.026211 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" containerName="rabbitmq" Jan 28 10:05:00 crc kubenswrapper[4735]: E0128 10:05:00.026225 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" containerName="setup-container" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.026231 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" containerName="setup-container" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.026397 4735 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" containerName="rabbitmq" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.027300 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.038574 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.039446 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9j4gv" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.039594 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.039803 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.040073 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.040965 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.041201 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.058712 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.193595 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.193707 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5514390d-3d1c-45af-b063-0ba631c09e43-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.193773 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5514390d-3d1c-45af-b063-0ba631c09e43-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.193808 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkhxt\" (UniqueName: \"kubernetes.io/projected/5514390d-3d1c-45af-b063-0ba631c09e43-kube-api-access-nkhxt\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.193867 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5514390d-3d1c-45af-b063-0ba631c09e43-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.193899 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.194038 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.194130 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5514390d-3d1c-45af-b063-0ba631c09e43-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.194209 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5514390d-3d1c-45af-b063-0ba631c09e43-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.194245 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.194391 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.296178 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.297204 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5514390d-3d1c-45af-b063-0ba631c09e43-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.297258 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5514390d-3d1c-45af-b063-0ba631c09e43-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.297293 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkhxt\" (UniqueName: \"kubernetes.io/projected/5514390d-3d1c-45af-b063-0ba631c09e43-kube-api-access-nkhxt\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.297353 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5514390d-3d1c-45af-b063-0ba631c09e43-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 
crc kubenswrapper[4735]: I0128 10:05:00.297395 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.297461 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.297677 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5514390d-3d1c-45af-b063-0ba631c09e43-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.297732 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5514390d-3d1c-45af-b063-0ba631c09e43-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.297781 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.297862 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.298143 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.298261 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.298484 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.298780 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5514390d-3d1c-45af-b063-0ba631c09e43-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.299136 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/5514390d-3d1c-45af-b063-0ba631c09e43-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.300061 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5514390d-3d1c-45af-b063-0ba631c09e43-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.306269 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5514390d-3d1c-45af-b063-0ba631c09e43-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.306336 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5514390d-3d1c-45af-b063-0ba631c09e43-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.306618 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.317304 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhxt\" (UniqueName: \"kubernetes.io/projected/5514390d-3d1c-45af-b063-0ba631c09e43-kube-api-access-nkhxt\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.319195 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5514390d-3d1c-45af-b063-0ba631c09e43-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.327641 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5514390d-3d1c-45af-b063-0ba631c09e43\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.359614 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.601011 4735 generic.go:334] "Generic (PLEG): container finished" podID="fab7048f-5f5e-4599-b853-ab66e03cae89" containerID="895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa" exitCode=0 Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.601085 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" event={"ID":"fab7048f-5f5e-4599-b853-ab66e03cae89","Type":"ContainerDied","Data":"895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa"} Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.602071 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" event={"ID":"fab7048f-5f5e-4599-b853-ab66e03cae89","Type":"ContainerStarted","Data":"146f4bf401cf54cb5516dc196a3c49071f7eec42bfb9a503eafc31e96d4f3f7f"} Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.606538 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"bbefc8b9-3deb-4ae1-a798-1e95f1014380","Type":"ContainerStarted","Data":"65f14a21859e6447aae7e1c54ed6d7c9d6dd4aa29dc64d9dc2f2491c6ed29dbc"} Jan 28 10:05:00 crc kubenswrapper[4735]: W0128 10:05:00.795027 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5514390d_3d1c_45af_b063_0ba631c09e43.slice/crio-4303f4f27c31584baad605e37b0ea6a325ed1686d9682984dff083ed06151100 WatchSource:0}: Error finding container 4303f4f27c31584baad605e37b0ea6a325ed1686d9682984dff083ed06151100: Status 404 returned error can't find the container with id 4303f4f27c31584baad605e37b0ea6a325ed1686d9682984dff083ed06151100 Jan 28 10:05:00 crc kubenswrapper[4735]: I0128 10:05:00.796182 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 10:05:01 crc kubenswrapper[4735]: I0128 10:05:01.523884 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb11b48d-f557-4ee4-aa1a-c23e68f65fd1" path="/var/lib/kubelet/pods/bb11b48d-f557-4ee4-aa1a-c23e68f65fd1/volumes" Jan 28 10:05:01 crc kubenswrapper[4735]: I0128 10:05:01.617692 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" event={"ID":"fab7048f-5f5e-4599-b853-ab66e03cae89","Type":"ContainerStarted","Data":"76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366"} Jan 28 10:05:01 crc kubenswrapper[4735]: I0128 10:05:01.621417 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5514390d-3d1c-45af-b063-0ba631c09e43","Type":"ContainerStarted","Data":"4303f4f27c31584baad605e37b0ea6a325ed1686d9682984dff083ed06151100"} Jan 28 10:05:01 crc kubenswrapper[4735]: I0128 10:05:01.624531 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"bbefc8b9-3deb-4ae1-a798-1e95f1014380","Type":"ContainerStarted","Data":"a2922c2dff6192d9c791cba121a8e1568217dc88c26f8f966e4afc5c90909b02"} Jan 28 10:05:01 crc kubenswrapper[4735]: I0128 10:05:01.645951 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" podStartSLOduration=3.6459347490000003 podStartE2EDuration="3.645934749s" podCreationTimestamp="2026-01-28 10:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:05:01.641603097 +0000 UTC m=+1354.791267500" watchObservedRunningTime="2026-01-28 10:05:01.645934749 +0000 UTC m=+1354.795599122" Jan 28 10:05:02 crc kubenswrapper[4735]: I0128 10:05:02.636942 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:05:03 crc kubenswrapper[4735]: I0128 10:05:03.651810 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5514390d-3d1c-45af-b063-0ba631c09e43","Type":"ContainerStarted","Data":"921df6aaab1566621366d6edb5a7597fc044b1a8f6609c3ac5419f4082e6c63d"} Jan 28 10:05:05 crc kubenswrapper[4735]: I0128 10:05:05.430502 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:05:05 crc kubenswrapper[4735]: I0128 10:05:05.431059 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:05:05 crc 
kubenswrapper[4735]: I0128 10:05:05.431115 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:05:05 crc kubenswrapper[4735]: I0128 10:05:05.431928 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1c45816a1405ea6213a313d884201ef5ccf1af72f7151ce23dc0ce396519b28"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:05:05 crc kubenswrapper[4735]: I0128 10:05:05.431997 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://c1c45816a1405ea6213a313d884201ef5ccf1af72f7151ce23dc0ce396519b28" gracePeriod=600 Jan 28 10:05:05 crc kubenswrapper[4735]: I0128 10:05:05.671611 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="c1c45816a1405ea6213a313d884201ef5ccf1af72f7151ce23dc0ce396519b28" exitCode=0 Jan 28 10:05:05 crc kubenswrapper[4735]: I0128 10:05:05.671798 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"c1c45816a1405ea6213a313d884201ef5ccf1af72f7151ce23dc0ce396519b28"} Jan 28 10:05:05 crc kubenswrapper[4735]: I0128 10:05:05.672063 4735 scope.go:117] "RemoveContainer" containerID="e4a04c510b9b270008568de903a447cbfe28b5ec6b75521aed00c265673e243e" Jan 28 10:05:06 crc kubenswrapper[4735]: I0128 10:05:06.685449 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" 
event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b"} Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.114771 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.239017 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-688kc"] Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.239316 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" podUID="e5699a8e-2a07-4f65-9512-2d434ff1f8ed" containerName="dnsmasq-dns" containerID="cri-o://93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8" gracePeriod=10 Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.342668 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-dpqvs"] Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.344622 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.358712 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-dpqvs"] Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.483673 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-config\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.483709 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.483824 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.483859 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.483894 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.483935 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.483956 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4hdr\" (UniqueName: \"kubernetes.io/projected/05f5d1ad-e9d0-4565-be27-0ad67a068056-kube-api-access-x4hdr\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.585902 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.586154 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.586194 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.586217 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4hdr\" (UniqueName: \"kubernetes.io/projected/05f5d1ad-e9d0-4565-be27-0ad67a068056-kube-api-access-x4hdr\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.586247 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-config\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.586263 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.586342 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.586762 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.586983 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.587472 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-config\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.587564 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.587996 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.588185 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05f5d1ad-e9d0-4565-be27-0ad67a068056-dns-svc\") pod 
\"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.605089 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4hdr\" (UniqueName: \"kubernetes.io/projected/05f5d1ad-e9d0-4565-be27-0ad67a068056-kube-api-access-x4hdr\") pod \"dnsmasq-dns-cb6ffcf87-dpqvs\" (UID: \"05f5d1ad-e9d0-4565-be27-0ad67a068056\") " pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.694862 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.712094 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.741873 4735 generic.go:334] "Generic (PLEG): container finished" podID="e5699a8e-2a07-4f65-9512-2d434ff1f8ed" containerID="93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8" exitCode=0 Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.741944 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" event={"ID":"e5699a8e-2a07-4f65-9512-2d434ff1f8ed","Type":"ContainerDied","Data":"93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8"} Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.741962 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.741980 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-688kc" event={"ID":"e5699a8e-2a07-4f65-9512-2d434ff1f8ed","Type":"ContainerDied","Data":"10a333a65f88407ea52b6a7320eb539c07f9d7c82d2e8ebe17e75df8f4ef994e"} Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.742005 4735 scope.go:117] "RemoveContainer" containerID="93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.789542 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-swift-storage-0\") pod \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.789666 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-config\") pod \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.789831 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-sb\") pod \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.789857 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg69l\" (UniqueName: \"kubernetes.io/projected/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-kube-api-access-tg69l\") pod \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " Jan 28 
10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.789915 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-svc\") pod \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.789953 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-nb\") pod \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\" (UID: \"e5699a8e-2a07-4f65-9512-2d434ff1f8ed\") " Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.800836 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-kube-api-access-tg69l" (OuterVolumeSpecName: "kube-api-access-tg69l") pod "e5699a8e-2a07-4f65-9512-2d434ff1f8ed" (UID: "e5699a8e-2a07-4f65-9512-2d434ff1f8ed"). InnerVolumeSpecName "kube-api-access-tg69l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.849612 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5699a8e-2a07-4f65-9512-2d434ff1f8ed" (UID: "e5699a8e-2a07-4f65-9512-2d434ff1f8ed"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.852288 4735 scope.go:117] "RemoveContainer" containerID="9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.852665 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5699a8e-2a07-4f65-9512-2d434ff1f8ed" (UID: "e5699a8e-2a07-4f65-9512-2d434ff1f8ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.855540 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-config" (OuterVolumeSpecName: "config") pod "e5699a8e-2a07-4f65-9512-2d434ff1f8ed" (UID: "e5699a8e-2a07-4f65-9512-2d434ff1f8ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.856959 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5699a8e-2a07-4f65-9512-2d434ff1f8ed" (UID: "e5699a8e-2a07-4f65-9512-2d434ff1f8ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.859153 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e5699a8e-2a07-4f65-9512-2d434ff1f8ed" (UID: "e5699a8e-2a07-4f65-9512-2d434ff1f8ed"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.877250 4735 scope.go:117] "RemoveContainer" containerID="93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8" Jan 28 10:05:09 crc kubenswrapper[4735]: E0128 10:05:09.877770 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8\": container with ID starting with 93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8 not found: ID does not exist" containerID="93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.877815 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8"} err="failed to get container status \"93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8\": rpc error: code = NotFound desc = could not find container \"93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8\": container with ID starting with 93ccf08dea4900791731db9a6ac67a9da70d0cb940fafd8969c2db0d896d92e8 not found: ID does not exist" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.877865 4735 scope.go:117] "RemoveContainer" containerID="9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb" Jan 28 10:05:09 crc kubenswrapper[4735]: E0128 10:05:09.878209 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb\": container with ID starting with 9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb not found: ID does not exist" containerID="9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.878230 
4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb"} err="failed to get container status \"9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb\": rpc error: code = NotFound desc = could not find container \"9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb\": container with ID starting with 9ff2b08081c8d92562c62a7b340c11a9fd50ee84a6755073b7f2e72edecae0bb not found: ID does not exist" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.898325 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.898354 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.898366 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg69l\" (UniqueName: \"kubernetes.io/projected/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-kube-api-access-tg69l\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.898375 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.898383 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:09 crc kubenswrapper[4735]: I0128 10:05:09.898391 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/e5699a8e-2a07-4f65-9512-2d434ff1f8ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:10 crc kubenswrapper[4735]: I0128 10:05:10.075198 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-688kc"] Jan 28 10:05:10 crc kubenswrapper[4735]: I0128 10:05:10.088235 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-688kc"] Jan 28 10:05:10 crc kubenswrapper[4735]: I0128 10:05:10.165180 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-dpqvs"] Jan 28 10:05:10 crc kubenswrapper[4735]: W0128 10:05:10.173370 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f5d1ad_e9d0_4565_be27_0ad67a068056.slice/crio-a39ff17a1c5e2cb4d33e63341f6b82cfa21aeaf654706329789f45f36767dd5c WatchSource:0}: Error finding container a39ff17a1c5e2cb4d33e63341f6b82cfa21aeaf654706329789f45f36767dd5c: Status 404 returned error can't find the container with id a39ff17a1c5e2cb4d33e63341f6b82cfa21aeaf654706329789f45f36767dd5c Jan 28 10:05:10 crc kubenswrapper[4735]: I0128 10:05:10.755136 4735 generic.go:334] "Generic (PLEG): container finished" podID="05f5d1ad-e9d0-4565-be27-0ad67a068056" containerID="2cf469f5ebc7d20e5425d09bd239fdc5abc8b2af2bcbf78cf34b593081300c58" exitCode=0 Jan 28 10:05:10 crc kubenswrapper[4735]: I0128 10:05:10.755221 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" event={"ID":"05f5d1ad-e9d0-4565-be27-0ad67a068056","Type":"ContainerDied","Data":"2cf469f5ebc7d20e5425d09bd239fdc5abc8b2af2bcbf78cf34b593081300c58"} Jan 28 10:05:10 crc kubenswrapper[4735]: I0128 10:05:10.755642 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" 
event={"ID":"05f5d1ad-e9d0-4565-be27-0ad67a068056","Type":"ContainerStarted","Data":"a39ff17a1c5e2cb4d33e63341f6b82cfa21aeaf654706329789f45f36767dd5c"} Jan 28 10:05:11 crc kubenswrapper[4735]: I0128 10:05:11.516377 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5699a8e-2a07-4f65-9512-2d434ff1f8ed" path="/var/lib/kubelet/pods/e5699a8e-2a07-4f65-9512-2d434ff1f8ed/volumes" Jan 28 10:05:11 crc kubenswrapper[4735]: I0128 10:05:11.768635 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" event={"ID":"05f5d1ad-e9d0-4565-be27-0ad67a068056","Type":"ContainerStarted","Data":"0897f28dc8bce2ba8e523b0b383e34c440e9b8325885f7d020b8cd621b3c3a62"} Jan 28 10:05:11 crc kubenswrapper[4735]: I0128 10:05:11.769011 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:11 crc kubenswrapper[4735]: I0128 10:05:11.811520 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" podStartSLOduration=2.811499462 podStartE2EDuration="2.811499462s" podCreationTimestamp="2026-01-28 10:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:05:11.79572014 +0000 UTC m=+1364.945384513" watchObservedRunningTime="2026-01-28 10:05:11.811499462 +0000 UTC m=+1364.961163835" Jan 28 10:05:19 crc kubenswrapper[4735]: I0128 10:05:19.713908 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-dpqvs" Jan 28 10:05:19 crc kubenswrapper[4735]: I0128 10:05:19.787921 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-vcd9g"] Jan 28 10:05:19 crc kubenswrapper[4735]: I0128 10:05:19.788181 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" 
podUID="fab7048f-5f5e-4599-b853-ab66e03cae89" containerName="dnsmasq-dns" containerID="cri-o://76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366" gracePeriod=10 Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.398153 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.499696 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-sb\") pod \"fab7048f-5f5e-4599-b853-ab66e03cae89\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.499876 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-swift-storage-0\") pod \"fab7048f-5f5e-4599-b853-ab66e03cae89\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.499921 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-svc\") pod \"fab7048f-5f5e-4599-b853-ab66e03cae89\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.499991 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-config\") pod \"fab7048f-5f5e-4599-b853-ab66e03cae89\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.500041 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srjv2\" (UniqueName: 
\"kubernetes.io/projected/fab7048f-5f5e-4599-b853-ab66e03cae89-kube-api-access-srjv2\") pod \"fab7048f-5f5e-4599-b853-ab66e03cae89\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.500098 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-openstack-edpm-ipam\") pod \"fab7048f-5f5e-4599-b853-ab66e03cae89\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.500131 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-nb\") pod \"fab7048f-5f5e-4599-b853-ab66e03cae89\" (UID: \"fab7048f-5f5e-4599-b853-ab66e03cae89\") " Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.529818 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab7048f-5f5e-4599-b853-ab66e03cae89-kube-api-access-srjv2" (OuterVolumeSpecName: "kube-api-access-srjv2") pod "fab7048f-5f5e-4599-b853-ab66e03cae89" (UID: "fab7048f-5f5e-4599-b853-ab66e03cae89"). InnerVolumeSpecName "kube-api-access-srjv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.574507 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fab7048f-5f5e-4599-b853-ab66e03cae89" (UID: "fab7048f-5f5e-4599-b853-ab66e03cae89"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.582495 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-config" (OuterVolumeSpecName: "config") pod "fab7048f-5f5e-4599-b853-ab66e03cae89" (UID: "fab7048f-5f5e-4599-b853-ab66e03cae89"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.593288 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fab7048f-5f5e-4599-b853-ab66e03cae89" (UID: "fab7048f-5f5e-4599-b853-ab66e03cae89"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.597726 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "fab7048f-5f5e-4599-b853-ab66e03cae89" (UID: "fab7048f-5f5e-4599-b853-ab66e03cae89"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.602186 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.602213 4735 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.602224 4735 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.602233 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srjv2\" (UniqueName: \"kubernetes.io/projected/fab7048f-5f5e-4599-b853-ab66e03cae89-kube-api-access-srjv2\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.602244 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.608112 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fab7048f-5f5e-4599-b853-ab66e03cae89" (UID: "fab7048f-5f5e-4599-b853-ab66e03cae89"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.628431 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fab7048f-5f5e-4599-b853-ab66e03cae89" (UID: "fab7048f-5f5e-4599-b853-ab66e03cae89"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.703738 4735 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.703786 4735 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fab7048f-5f5e-4599-b853-ab66e03cae89-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.865847 4735 generic.go:334] "Generic (PLEG): container finished" podID="fab7048f-5f5e-4599-b853-ab66e03cae89" containerID="76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366" exitCode=0 Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.865925 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" event={"ID":"fab7048f-5f5e-4599-b853-ab66e03cae89","Type":"ContainerDied","Data":"76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366"} Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.866935 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" event={"ID":"fab7048f-5f5e-4599-b853-ab66e03cae89","Type":"ContainerDied","Data":"146f4bf401cf54cb5516dc196a3c49071f7eec42bfb9a503eafc31e96d4f3f7f"} Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.865977 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-vcd9g" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.866973 4735 scope.go:117] "RemoveContainer" containerID="76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.892641 4735 scope.go:117] "RemoveContainer" containerID="895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.915413 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-vcd9g"] Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.925541 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-vcd9g"] Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.939080 4735 scope.go:117] "RemoveContainer" containerID="76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366" Jan 28 10:05:20 crc kubenswrapper[4735]: E0128 10:05:20.941586 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366\": container with ID starting with 76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366 not found: ID does not exist" containerID="76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.941676 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366"} err="failed to get container status \"76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366\": rpc error: code = NotFound desc = could not find container \"76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366\": container with ID starting with 76e11f2f5644c93f31cfbd9fbefa073d9fe8bc7744b353102fc81c30fc3be366 not found: ID does not exist" Jan 28 
10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.941766 4735 scope.go:117] "RemoveContainer" containerID="895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa" Jan 28 10:05:20 crc kubenswrapper[4735]: E0128 10:05:20.942123 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa\": container with ID starting with 895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa not found: ID does not exist" containerID="895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa" Jan 28 10:05:20 crc kubenswrapper[4735]: I0128 10:05:20.942204 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa"} err="failed to get container status \"895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa\": rpc error: code = NotFound desc = could not find container \"895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa\": container with ID starting with 895de2f74a63c1d95cc1bcf3de3529d2f8e8b831be9236a5c80dc9d8ef435faa not found: ID does not exist" Jan 28 10:05:21 crc kubenswrapper[4735]: I0128 10:05:21.517942 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab7048f-5f5e-4599-b853-ab66e03cae89" path="/var/lib/kubelet/pods/fab7048f-5f5e-4599-b853-ab66e03cae89/volumes" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.743844 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz"] Jan 28 10:05:32 crc kubenswrapper[4735]: E0128 10:05:32.744988 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab7048f-5f5e-4599-b853-ab66e03cae89" containerName="dnsmasq-dns" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.745031 4735 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fab7048f-5f5e-4599-b853-ab66e03cae89" containerName="dnsmasq-dns" Jan 28 10:05:32 crc kubenswrapper[4735]: E0128 10:05:32.745043 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab7048f-5f5e-4599-b853-ab66e03cae89" containerName="init" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.745051 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab7048f-5f5e-4599-b853-ab66e03cae89" containerName="init" Jan 28 10:05:32 crc kubenswrapper[4735]: E0128 10:05:32.745070 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5699a8e-2a07-4f65-9512-2d434ff1f8ed" containerName="dnsmasq-dns" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.745078 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5699a8e-2a07-4f65-9512-2d434ff1f8ed" containerName="dnsmasq-dns" Jan 28 10:05:32 crc kubenswrapper[4735]: E0128 10:05:32.745097 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5699a8e-2a07-4f65-9512-2d434ff1f8ed" containerName="init" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.745105 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5699a8e-2a07-4f65-9512-2d434ff1f8ed" containerName="init" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.745344 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5699a8e-2a07-4f65-9512-2d434ff1f8ed" containerName="dnsmasq-dns" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.745366 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab7048f-5f5e-4599-b853-ab66e03cae89" containerName="dnsmasq-dns" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.746218 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.748113 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.748279 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.748303 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.748896 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.753681 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz"] Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.762010 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.762061 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.762133 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.762210 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc6d7\" (UniqueName: \"kubernetes.io/projected/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-kube-api-access-bc6d7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.863670 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.863974 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc6d7\" (UniqueName: \"kubernetes.io/projected/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-kube-api-access-bc6d7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.864037 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.864062 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.870360 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.872021 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.872423 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:32 crc kubenswrapper[4735]: I0128 10:05:32.886858 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc6d7\" (UniqueName: \"kubernetes.io/projected/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-kube-api-access-bc6d7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:33 crc kubenswrapper[4735]: I0128 10:05:33.071703 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:33 crc kubenswrapper[4735]: I0128 10:05:33.589393 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz"] Jan 28 10:05:33 crc kubenswrapper[4735]: I0128 10:05:33.595498 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 10:05:33 crc kubenswrapper[4735]: I0128 10:05:33.998656 4735 generic.go:334] "Generic (PLEG): container finished" podID="bbefc8b9-3deb-4ae1-a798-1e95f1014380" containerID="a2922c2dff6192d9c791cba121a8e1568217dc88c26f8f966e4afc5c90909b02" exitCode=0 Jan 28 10:05:33 crc kubenswrapper[4735]: I0128 10:05:33.998739 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bbefc8b9-3deb-4ae1-a798-1e95f1014380","Type":"ContainerDied","Data":"a2922c2dff6192d9c791cba121a8e1568217dc88c26f8f966e4afc5c90909b02"} Jan 28 10:05:34 crc kubenswrapper[4735]: I0128 10:05:34.000858 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" event={"ID":"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab","Type":"ContainerStarted","Data":"ebeda11ae33fe124a2cbd7eb1e14b09b49a1523afd8c0fc3c8125ad461d07ec8"} Jan 28 10:05:35 crc 
kubenswrapper[4735]: I0128 10:05:35.013739 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bbefc8b9-3deb-4ae1-a798-1e95f1014380","Type":"ContainerStarted","Data":"a3f1a2b280ebb6e704ee6a5e09cf94a450eec61713d28b5f2a347acb316effb2"} Jan 28 10:05:35 crc kubenswrapper[4735]: I0128 10:05:35.015890 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 10:05:35 crc kubenswrapper[4735]: I0128 10:05:35.051287 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.051267101 podStartE2EDuration="37.051267101s" podCreationTimestamp="2026-01-28 10:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:05:35.038625003 +0000 UTC m=+1388.188289386" watchObservedRunningTime="2026-01-28 10:05:35.051267101 +0000 UTC m=+1388.200931464" Jan 28 10:05:36 crc kubenswrapper[4735]: I0128 10:05:36.026687 4735 generic.go:334] "Generic (PLEG): container finished" podID="5514390d-3d1c-45af-b063-0ba631c09e43" containerID="921df6aaab1566621366d6edb5a7597fc044b1a8f6609c3ac5419f4082e6c63d" exitCode=0 Jan 28 10:05:36 crc kubenswrapper[4735]: I0128 10:05:36.026796 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5514390d-3d1c-45af-b063-0ba631c09e43","Type":"ContainerDied","Data":"921df6aaab1566621366d6edb5a7597fc044b1a8f6609c3ac5419f4082e6c63d"} Jan 28 10:05:37 crc kubenswrapper[4735]: I0128 10:05:37.054938 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5514390d-3d1c-45af-b063-0ba631c09e43","Type":"ContainerStarted","Data":"7ac995330e2fe97a6bff91ea88aa15a6c88ccf2007cbcfdf1a1ef61bbb4e9c0e"} Jan 28 10:05:37 crc kubenswrapper[4735]: I0128 10:05:37.055862 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:37 crc kubenswrapper[4735]: I0128 10:05:37.093379 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.09335515 podStartE2EDuration="38.09335515s" podCreationTimestamp="2026-01-28 10:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:05:37.075685443 +0000 UTC m=+1390.225349826" watchObservedRunningTime="2026-01-28 10:05:37.09335515 +0000 UTC m=+1390.243019523" Jan 28 10:05:44 crc kubenswrapper[4735]: I0128 10:05:44.138301 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" event={"ID":"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab","Type":"ContainerStarted","Data":"d336854e4eb0cd8b88e7be67651de7e1f990a09fa63b54363d64ec4685111303"} Jan 28 10:05:44 crc kubenswrapper[4735]: I0128 10:05:44.154577 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" podStartSLOduration=2.072525849 podStartE2EDuration="12.154561576s" podCreationTimestamp="2026-01-28 10:05:32 +0000 UTC" firstStartedPulling="2026-01-28 10:05:33.595236693 +0000 UTC m=+1386.744901066" lastFinishedPulling="2026-01-28 10:05:43.67727242 +0000 UTC m=+1396.826936793" observedRunningTime="2026-01-28 10:05:44.152706803 +0000 UTC m=+1397.302371176" watchObservedRunningTime="2026-01-28 10:05:44.154561576 +0000 UTC m=+1397.304225949" Jan 28 10:05:48 crc kubenswrapper[4735]: I0128 10:05:48.548848 4735 scope.go:117] "RemoveContainer" containerID="22cfdb56f8ec3e4e521690a9202091987ed8bf8a47ba23e2ea25ccd2e53ef809" Jan 28 10:05:49 crc kubenswrapper[4735]: I0128 10:05:49.298296 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 10:05:50 crc 
kubenswrapper[4735]: I0128 10:05:50.362872 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 10:05:55 crc kubenswrapper[4735]: I0128 10:05:55.245007 4735 generic.go:334] "Generic (PLEG): container finished" podID="1b4ed502-7077-42e3-b1a5-a70f4d8c2aab" containerID="d336854e4eb0cd8b88e7be67651de7e1f990a09fa63b54363d64ec4685111303" exitCode=0 Jan 28 10:05:55 crc kubenswrapper[4735]: I0128 10:05:55.245098 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" event={"ID":"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab","Type":"ContainerDied","Data":"d336854e4eb0cd8b88e7be67651de7e1f990a09fa63b54363d64ec4685111303"} Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.621080 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.716912 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc6d7\" (UniqueName: \"kubernetes.io/projected/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-kube-api-access-bc6d7\") pod \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.717437 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-repo-setup-combined-ca-bundle\") pod \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.717475 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-inventory\") pod 
\"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.717523 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-ssh-key-openstack-edpm-ipam\") pod \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\" (UID: \"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab\") " Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.722631 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-kube-api-access-bc6d7" (OuterVolumeSpecName: "kube-api-access-bc6d7") pod "1b4ed502-7077-42e3-b1a5-a70f4d8c2aab" (UID: "1b4ed502-7077-42e3-b1a5-a70f4d8c2aab"). InnerVolumeSpecName "kube-api-access-bc6d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.722820 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1b4ed502-7077-42e3-b1a5-a70f4d8c2aab" (UID: "1b4ed502-7077-42e3-b1a5-a70f4d8c2aab"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.742675 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-inventory" (OuterVolumeSpecName: "inventory") pod "1b4ed502-7077-42e3-b1a5-a70f4d8c2aab" (UID: "1b4ed502-7077-42e3-b1a5-a70f4d8c2aab"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.745561 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1b4ed502-7077-42e3-b1a5-a70f4d8c2aab" (UID: "1b4ed502-7077-42e3-b1a5-a70f4d8c2aab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.820222 4735 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.820251 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.820261 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:56 crc kubenswrapper[4735]: I0128 10:05:56.820271 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc6d7\" (UniqueName: \"kubernetes.io/projected/1b4ed502-7077-42e3-b1a5-a70f4d8c2aab-kube-api-access-bc6d7\") on node \"crc\" DevicePath \"\"" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.263663 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" event={"ID":"1b4ed502-7077-42e3-b1a5-a70f4d8c2aab","Type":"ContainerDied","Data":"ebeda11ae33fe124a2cbd7eb1e14b09b49a1523afd8c0fc3c8125ad461d07ec8"} Jan 28 
10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.263726 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebeda11ae33fe124a2cbd7eb1e14b09b49a1523afd8c0fc3c8125ad461d07ec8" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.263729 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.338552 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99"] Jan 28 10:05:57 crc kubenswrapper[4735]: E0128 10:05:57.339213 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4ed502-7077-42e3-b1a5-a70f4d8c2aab" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.339246 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4ed502-7077-42e3-b1a5-a70f4d8c2aab" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.339580 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b4ed502-7077-42e3-b1a5-a70f4d8c2aab" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.340519 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.345225 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.345527 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.346205 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.346240 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.352306 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99"] Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.432397 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l4f99\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.432486 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l4f99\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.432612 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dns2\" (UniqueName: \"kubernetes.io/projected/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-kube-api-access-2dns2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l4f99\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.534872 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l4f99\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.534922 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dns2\" (UniqueName: \"kubernetes.io/projected/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-kube-api-access-2dns2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l4f99\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.535050 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l4f99\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.540923 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-l4f99\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.549126 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l4f99\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.553248 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dns2\" (UniqueName: \"kubernetes.io/projected/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-kube-api-access-2dns2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-l4f99\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:57 crc kubenswrapper[4735]: I0128 10:05:57.664872 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:05:58 crc kubenswrapper[4735]: I0128 10:05:58.191058 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99"] Jan 28 10:05:58 crc kubenswrapper[4735]: I0128 10:05:58.283090 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" event={"ID":"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee","Type":"ContainerStarted","Data":"1e2a542c332514c7a30988c565d2b6be9c6ad04b9ef6bcecfc799080adcc19b8"} Jan 28 10:06:00 crc kubenswrapper[4735]: I0128 10:06:00.303557 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" event={"ID":"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee","Type":"ContainerStarted","Data":"aab6720680a7582a008cf9699d40d7cc11d789a1e2c325d17cbb3a4fcfe0eb20"} Jan 28 10:06:00 crc kubenswrapper[4735]: I0128 10:06:00.326212 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" podStartSLOduration=1.774027621 podStartE2EDuration="3.326192386s" podCreationTimestamp="2026-01-28 10:05:57 +0000 UTC" firstStartedPulling="2026-01-28 10:05:58.193675475 +0000 UTC m=+1411.343339848" lastFinishedPulling="2026-01-28 10:05:59.74584024 +0000 UTC m=+1412.895504613" observedRunningTime="2026-01-28 10:06:00.318954006 +0000 UTC m=+1413.468618389" watchObservedRunningTime="2026-01-28 10:06:00.326192386 +0000 UTC m=+1413.475856759" Jan 28 10:06:03 crc kubenswrapper[4735]: I0128 10:06:03.331201 4735 generic.go:334] "Generic (PLEG): container finished" podID="84ad913f-0b4f-4644-8aeb-b85bfc1da4ee" containerID="aab6720680a7582a008cf9699d40d7cc11d789a1e2c325d17cbb3a4fcfe0eb20" exitCode=0 Jan 28 10:06:03 crc kubenswrapper[4735]: I0128 10:06:03.331328 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" event={"ID":"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee","Type":"ContainerDied","Data":"aab6720680a7582a008cf9699d40d7cc11d789a1e2c325d17cbb3a4fcfe0eb20"} Jan 28 10:06:04 crc kubenswrapper[4735]: I0128 10:06:04.732938 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:06:04 crc kubenswrapper[4735]: I0128 10:06:04.917937 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-inventory\") pod \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " Jan 28 10:06:04 crc kubenswrapper[4735]: I0128 10:06:04.918124 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dns2\" (UniqueName: \"kubernetes.io/projected/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-kube-api-access-2dns2\") pod \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " Jan 28 10:06:04 crc kubenswrapper[4735]: I0128 10:06:04.918172 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-ssh-key-openstack-edpm-ipam\") pod \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\" (UID: \"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee\") " Jan 28 10:06:04 crc kubenswrapper[4735]: I0128 10:06:04.926120 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-kube-api-access-2dns2" (OuterVolumeSpecName: "kube-api-access-2dns2") pod "84ad913f-0b4f-4644-8aeb-b85bfc1da4ee" (UID: "84ad913f-0b4f-4644-8aeb-b85bfc1da4ee"). InnerVolumeSpecName "kube-api-access-2dns2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:06:04 crc kubenswrapper[4735]: I0128 10:06:04.952966 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-inventory" (OuterVolumeSpecName: "inventory") pod "84ad913f-0b4f-4644-8aeb-b85bfc1da4ee" (UID: "84ad913f-0b4f-4644-8aeb-b85bfc1da4ee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:06:04 crc kubenswrapper[4735]: I0128 10:06:04.973706 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "84ad913f-0b4f-4644-8aeb-b85bfc1da4ee" (UID: "84ad913f-0b4f-4644-8aeb-b85bfc1da4ee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.021254 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.021310 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dns2\" (UniqueName: \"kubernetes.io/projected/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-kube-api-access-2dns2\") on node \"crc\" DevicePath \"\"" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.021331 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84ad913f-0b4f-4644-8aeb-b85bfc1da4ee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.346682 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" 
event={"ID":"84ad913f-0b4f-4644-8aeb-b85bfc1da4ee","Type":"ContainerDied","Data":"1e2a542c332514c7a30988c565d2b6be9c6ad04b9ef6bcecfc799080adcc19b8"} Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.346720 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-l4f99" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.346720 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e2a542c332514c7a30988c565d2b6be9c6ad04b9ef6bcecfc799080adcc19b8" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.415296 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb"] Jan 28 10:06:05 crc kubenswrapper[4735]: E0128 10:06:05.415685 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ad913f-0b4f-4644-8aeb-b85bfc1da4ee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.415700 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ad913f-0b4f-4644-8aeb-b85bfc1da4ee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.415994 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ad913f-0b4f-4644-8aeb-b85bfc1da4ee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.416680 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.420467 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.420649 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.420703 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.420813 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.430257 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb"] Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.530407 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.530839 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq527\" (UniqueName: \"kubernetes.io/projected/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-kube-api-access-zq527\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.530965 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.531073 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.633891 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.633967 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.634184 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.634271 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq527\" (UniqueName: \"kubernetes.io/projected/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-kube-api-access-zq527\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.638710 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.639199 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.639340 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.660281 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq527\" (UniqueName: \"kubernetes.io/projected/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-kube-api-access-zq527\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:05 crc kubenswrapper[4735]: I0128 10:06:05.774620 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:06:06 crc kubenswrapper[4735]: I0128 10:06:06.301934 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb"] Jan 28 10:06:06 crc kubenswrapper[4735]: I0128 10:06:06.355612 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" event={"ID":"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd","Type":"ContainerStarted","Data":"25c9872e70719bbb51c197dc540ab592a3f0c9fadf8b9c1b47125831e9b01ffb"} Jan 28 10:06:07 crc kubenswrapper[4735]: I0128 10:06:07.365924 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" event={"ID":"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd","Type":"ContainerStarted","Data":"bc18a59897a79b120b9e910d79b51db34fcc014413b7cae1fbf681e69588a42d"} Jan 28 10:06:07 crc kubenswrapper[4735]: I0128 10:06:07.395622 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" podStartSLOduration=1.97434202 podStartE2EDuration="2.395599745s" podCreationTimestamp="2026-01-28 10:06:05 +0000 UTC" firstStartedPulling="2026-01-28 10:06:06.311943309 +0000 UTC m=+1419.461607702" 
lastFinishedPulling="2026-01-28 10:06:06.733201013 +0000 UTC m=+1419.882865427" observedRunningTime="2026-01-28 10:06:07.383984001 +0000 UTC m=+1420.533648404" watchObservedRunningTime="2026-01-28 10:06:07.395599745 +0000 UTC m=+1420.545264128" Jan 28 10:06:48 crc kubenswrapper[4735]: I0128 10:06:48.681266 4735 scope.go:117] "RemoveContainer" containerID="b8bfd489d50d55933182c7098197cd6d387579ab42190cfec71fb6a4b17a8b2d" Jan 28 10:06:48 crc kubenswrapper[4735]: I0128 10:06:48.725823 4735 scope.go:117] "RemoveContainer" containerID="24e45cdaa971a1e834c2b55e2121b5958f39c764e37963c8c405b9ecd00cdec4" Jan 28 10:06:48 crc kubenswrapper[4735]: I0128 10:06:48.778447 4735 scope.go:117] "RemoveContainer" containerID="550cc2abfbd713b4a2ca9ed601f8b42d468f50ede096d6cd369190d22864ce5b" Jan 28 10:07:05 crc kubenswrapper[4735]: I0128 10:07:05.430035 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:07:05 crc kubenswrapper[4735]: I0128 10:07:05.430631 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:07:35 crc kubenswrapper[4735]: I0128 10:07:35.429984 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:07:35 crc kubenswrapper[4735]: I0128 10:07:35.430494 4735 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:07:48 crc kubenswrapper[4735]: I0128 10:07:48.877857 4735 scope.go:117] "RemoveContainer" containerID="228da2febe52de96cf2c285542085214f4995e28254b27f3b32a7288fffcbda9" Jan 28 10:08:05 crc kubenswrapper[4735]: I0128 10:08:05.429497 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:08:05 crc kubenswrapper[4735]: I0128 10:08:05.430127 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:08:05 crc kubenswrapper[4735]: I0128 10:08:05.430184 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:08:05 crc kubenswrapper[4735]: I0128 10:08:05.431365 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:08:05 crc kubenswrapper[4735]: I0128 10:08:05.431445 4735 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" gracePeriod=600 Jan 28 10:08:05 crc kubenswrapper[4735]: E0128 10:08:05.554991 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:08:06 crc kubenswrapper[4735]: I0128 10:08:06.567318 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" exitCode=0 Jan 28 10:08:06 crc kubenswrapper[4735]: I0128 10:08:06.567365 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b"} Jan 28 10:08:06 crc kubenswrapper[4735]: I0128 10:08:06.567449 4735 scope.go:117] "RemoveContainer" containerID="c1c45816a1405ea6213a313d884201ef5ccf1af72f7151ce23dc0ce396519b28" Jan 28 10:08:06 crc kubenswrapper[4735]: I0128 10:08:06.571322 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:08:06 crc kubenswrapper[4735]: E0128 10:08:06.572452 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:08:17 crc kubenswrapper[4735]: I0128 10:08:17.513412 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:08:17 crc kubenswrapper[4735]: E0128 10:08:17.514732 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:08:29 crc kubenswrapper[4735]: I0128 10:08:29.503458 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:08:29 crc kubenswrapper[4735]: E0128 10:08:29.505617 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.482706 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7vl8f"] Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.486285 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.499335 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vl8f"] Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.605029 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-utilities\") pod \"redhat-marketplace-7vl8f\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.605130 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzbcr\" (UniqueName: \"kubernetes.io/projected/54c76400-e257-4b83-a11e-c5aade4987cb-kube-api-access-mzbcr\") pod \"redhat-marketplace-7vl8f\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.605250 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-catalog-content\") pod \"redhat-marketplace-7vl8f\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.707182 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-utilities\") pod \"redhat-marketplace-7vl8f\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.707463 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mzbcr\" (UniqueName: \"kubernetes.io/projected/54c76400-e257-4b83-a11e-c5aade4987cb-kube-api-access-mzbcr\") pod \"redhat-marketplace-7vl8f\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.707546 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-catalog-content\") pod \"redhat-marketplace-7vl8f\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.707906 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-utilities\") pod \"redhat-marketplace-7vl8f\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.708076 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-catalog-content\") pod \"redhat-marketplace-7vl8f\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.727143 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzbcr\" (UniqueName: \"kubernetes.io/projected/54c76400-e257-4b83-a11e-c5aade4987cb-kube-api-access-mzbcr\") pod \"redhat-marketplace-7vl8f\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:38 crc kubenswrapper[4735]: I0128 10:08:38.819243 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:39 crc kubenswrapper[4735]: I0128 10:08:39.309589 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vl8f"] Jan 28 10:08:39 crc kubenswrapper[4735]: I0128 10:08:39.917372 4735 generic.go:334] "Generic (PLEG): container finished" podID="54c76400-e257-4b83-a11e-c5aade4987cb" containerID="7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f" exitCode=0 Jan 28 10:08:39 crc kubenswrapper[4735]: I0128 10:08:39.917557 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vl8f" event={"ID":"54c76400-e257-4b83-a11e-c5aade4987cb","Type":"ContainerDied","Data":"7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f"} Jan 28 10:08:39 crc kubenswrapper[4735]: I0128 10:08:39.917825 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vl8f" event={"ID":"54c76400-e257-4b83-a11e-c5aade4987cb","Type":"ContainerStarted","Data":"478388abda99f33cdd8c0d0645c9d38ba8f32138af24d846b9379e1fd941f9d8"} Jan 28 10:08:40 crc kubenswrapper[4735]: I0128 10:08:40.929460 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vl8f" event={"ID":"54c76400-e257-4b83-a11e-c5aade4987cb","Type":"ContainerStarted","Data":"fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01"} Jan 28 10:08:41 crc kubenswrapper[4735]: I0128 10:08:41.940834 4735 generic.go:334] "Generic (PLEG): container finished" podID="54c76400-e257-4b83-a11e-c5aade4987cb" containerID="fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01" exitCode=0 Jan 28 10:08:41 crc kubenswrapper[4735]: I0128 10:08:41.941072 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vl8f" 
event={"ID":"54c76400-e257-4b83-a11e-c5aade4987cb","Type":"ContainerDied","Data":"fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01"} Jan 28 10:08:44 crc kubenswrapper[4735]: I0128 10:08:44.502801 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:08:44 crc kubenswrapper[4735]: E0128 10:08:44.503548 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:08:47 crc kubenswrapper[4735]: I0128 10:08:47.010305 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vl8f" event={"ID":"54c76400-e257-4b83-a11e-c5aade4987cb","Type":"ContainerStarted","Data":"10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b"} Jan 28 10:08:47 crc kubenswrapper[4735]: I0128 10:08:47.038956 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7vl8f" podStartSLOduration=2.23211796 podStartE2EDuration="9.038936455s" podCreationTimestamp="2026-01-28 10:08:38 +0000 UTC" firstStartedPulling="2026-01-28 10:08:39.919456745 +0000 UTC m=+1573.069121128" lastFinishedPulling="2026-01-28 10:08:46.72627525 +0000 UTC m=+1579.875939623" observedRunningTime="2026-01-28 10:08:47.033149595 +0000 UTC m=+1580.182813988" watchObservedRunningTime="2026-01-28 10:08:47.038936455 +0000 UTC m=+1580.188600828" Jan 28 10:08:48 crc kubenswrapper[4735]: I0128 10:08:48.820113 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:48 crc 
kubenswrapper[4735]: I0128 10:08:48.820410 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:48 crc kubenswrapper[4735]: I0128 10:08:48.870045 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:48 crc kubenswrapper[4735]: I0128 10:08:48.986650 4735 scope.go:117] "RemoveContainer" containerID="c3b0479d66f581ff80871df5f49f894e8b2dbbee9103a614b9cbc384aedaa1c7" Jan 28 10:08:49 crc kubenswrapper[4735]: I0128 10:08:49.020955 4735 scope.go:117] "RemoveContainer" containerID="99f414f9ffc9be16ac74924d33c4ed5ff0b9de076a5fa941301d456020d03568" Jan 28 10:08:55 crc kubenswrapper[4735]: I0128 10:08:55.502673 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:08:55 crc kubenswrapper[4735]: E0128 10:08:55.503791 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:08:58 crc kubenswrapper[4735]: I0128 10:08:58.876467 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:58 crc kubenswrapper[4735]: I0128 10:08:58.936698 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vl8f"] Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.140603 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7vl8f" podUID="54c76400-e257-4b83-a11e-c5aade4987cb" 
containerName="registry-server" containerID="cri-o://10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b" gracePeriod=2 Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.601467 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.777965 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzbcr\" (UniqueName: \"kubernetes.io/projected/54c76400-e257-4b83-a11e-c5aade4987cb-kube-api-access-mzbcr\") pod \"54c76400-e257-4b83-a11e-c5aade4987cb\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.778094 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-utilities\") pod \"54c76400-e257-4b83-a11e-c5aade4987cb\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.778216 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-catalog-content\") pod \"54c76400-e257-4b83-a11e-c5aade4987cb\" (UID: \"54c76400-e257-4b83-a11e-c5aade4987cb\") " Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.780037 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-utilities" (OuterVolumeSpecName: "utilities") pod "54c76400-e257-4b83-a11e-c5aade4987cb" (UID: "54c76400-e257-4b83-a11e-c5aade4987cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.782800 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54c76400-e257-4b83-a11e-c5aade4987cb-kube-api-access-mzbcr" (OuterVolumeSpecName: "kube-api-access-mzbcr") pod "54c76400-e257-4b83-a11e-c5aade4987cb" (UID: "54c76400-e257-4b83-a11e-c5aade4987cb"). InnerVolumeSpecName "kube-api-access-mzbcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.800640 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54c76400-e257-4b83-a11e-c5aade4987cb" (UID: "54c76400-e257-4b83-a11e-c5aade4987cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.880569 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzbcr\" (UniqueName: \"kubernetes.io/projected/54c76400-e257-4b83-a11e-c5aade4987cb-kube-api-access-mzbcr\") on node \"crc\" DevicePath \"\"" Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.880603 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:08:59 crc kubenswrapper[4735]: I0128 10:08:59.880614 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54c76400-e257-4b83-a11e-c5aade4987cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.155988 4735 generic.go:334] "Generic (PLEG): container finished" podID="54c76400-e257-4b83-a11e-c5aade4987cb" 
containerID="10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b" exitCode=0 Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.156047 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vl8f" event={"ID":"54c76400-e257-4b83-a11e-c5aade4987cb","Type":"ContainerDied","Data":"10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b"} Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.156087 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7vl8f" event={"ID":"54c76400-e257-4b83-a11e-c5aade4987cb","Type":"ContainerDied","Data":"478388abda99f33cdd8c0d0645c9d38ba8f32138af24d846b9379e1fd941f9d8"} Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.156109 4735 scope.go:117] "RemoveContainer" containerID="10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.156100 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7vl8f" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.215845 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vl8f"] Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.218476 4735 scope.go:117] "RemoveContainer" containerID="fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.223871 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7vl8f"] Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.242173 4735 scope.go:117] "RemoveContainer" containerID="7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.282628 4735 scope.go:117] "RemoveContainer" containerID="10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b" Jan 28 10:09:00 crc kubenswrapper[4735]: E0128 10:09:00.284070 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b\": container with ID starting with 10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b not found: ID does not exist" containerID="10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.284108 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b"} err="failed to get container status \"10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b\": rpc error: code = NotFound desc = could not find container \"10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b\": container with ID starting with 10a1c3cbe49782fef5cc46a58437836513d1bc42a5f48781c07114088e142a7b not found: 
ID does not exist" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.284135 4735 scope.go:117] "RemoveContainer" containerID="fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01" Jan 28 10:09:00 crc kubenswrapper[4735]: E0128 10:09:00.284474 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01\": container with ID starting with fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01 not found: ID does not exist" containerID="fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.284502 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01"} err="failed to get container status \"fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01\": rpc error: code = NotFound desc = could not find container \"fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01\": container with ID starting with fd6334ee004e756d15f068ebbffbc1aeb2d9038508626a0bfa50ec3d6cdf5b01 not found: ID does not exist" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.284518 4735 scope.go:117] "RemoveContainer" containerID="7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f" Jan 28 10:09:00 crc kubenswrapper[4735]: E0128 10:09:00.285868 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f\": container with ID starting with 7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f not found: ID does not exist" containerID="7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f" Jan 28 10:09:00 crc kubenswrapper[4735]: I0128 10:09:00.285919 4735 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f"} err="failed to get container status \"7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f\": rpc error: code = NotFound desc = could not find container \"7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f\": container with ID starting with 7d6ec7cd60c25187663f91ec4fe2ad4adcf3a79ed8949ff93ad16c442a85991f not found: ID does not exist" Jan 28 10:09:01 crc kubenswrapper[4735]: I0128 10:09:01.514965 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54c76400-e257-4b83-a11e-c5aade4987cb" path="/var/lib/kubelet/pods/54c76400-e257-4b83-a11e-c5aade4987cb/volumes" Jan 28 10:09:06 crc kubenswrapper[4735]: I0128 10:09:06.503417 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:09:06 crc kubenswrapper[4735]: E0128 10:09:06.504177 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:09:17 crc kubenswrapper[4735]: I0128 10:09:17.513848 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:09:17 crc kubenswrapper[4735]: E0128 10:09:17.514955 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:09:29 crc kubenswrapper[4735]: I0128 10:09:29.459668 4735 generic.go:334] "Generic (PLEG): container finished" podID="7d008e88-e9b6-4c2e-befa-76e9ba1fbadd" containerID="bc18a59897a79b120b9e910d79b51db34fcc014413b7cae1fbf681e69588a42d" exitCode=0 Jan 28 10:09:29 crc kubenswrapper[4735]: I0128 10:09:29.459732 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" event={"ID":"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd","Type":"ContainerDied","Data":"bc18a59897a79b120b9e910d79b51db34fcc014413b7cae1fbf681e69588a42d"} Jan 28 10:09:30 crc kubenswrapper[4735]: I0128 10:09:30.504197 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:09:30 crc kubenswrapper[4735]: E0128 10:09:30.504484 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:09:30 crc kubenswrapper[4735]: I0128 10:09:30.915322 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:09:30 crc kubenswrapper[4735]: I0128 10:09:30.994681 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-inventory\") pod \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " Jan 28 10:09:30 crc kubenswrapper[4735]: I0128 10:09:30.994818 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq527\" (UniqueName: \"kubernetes.io/projected/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-kube-api-access-zq527\") pod \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " Jan 28 10:09:30 crc kubenswrapper[4735]: I0128 10:09:30.994907 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-bootstrap-combined-ca-bundle\") pod \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " Jan 28 10:09:30 crc kubenswrapper[4735]: I0128 10:09:30.994989 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-ssh-key-openstack-edpm-ipam\") pod \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\" (UID: \"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd\") " Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.001176 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7d008e88-e9b6-4c2e-befa-76e9ba1fbadd" (UID: "7d008e88-e9b6-4c2e-befa-76e9ba1fbadd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.004992 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-kube-api-access-zq527" (OuterVolumeSpecName: "kube-api-access-zq527") pod "7d008e88-e9b6-4c2e-befa-76e9ba1fbadd" (UID: "7d008e88-e9b6-4c2e-befa-76e9ba1fbadd"). InnerVolumeSpecName "kube-api-access-zq527". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.024906 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-inventory" (OuterVolumeSpecName: "inventory") pod "7d008e88-e9b6-4c2e-befa-76e9ba1fbadd" (UID: "7d008e88-e9b6-4c2e-befa-76e9ba1fbadd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.025220 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7d008e88-e9b6-4c2e-befa-76e9ba1fbadd" (UID: "7d008e88-e9b6-4c2e-befa-76e9ba1fbadd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.097291 4735 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.097335 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.097349 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.097358 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq527\" (UniqueName: \"kubernetes.io/projected/7d008e88-e9b6-4c2e-befa-76e9ba1fbadd-kube-api-access-zq527\") on node \"crc\" DevicePath \"\"" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.478201 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" event={"ID":"7d008e88-e9b6-4c2e-befa-76e9ba1fbadd","Type":"ContainerDied","Data":"25c9872e70719bbb51c197dc540ab592a3f0c9fadf8b9c1b47125831e9b01ffb"} Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.478520 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c9872e70719bbb51c197dc540ab592a3f0c9fadf8b9c1b47125831e9b01ffb" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.478271 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.632351 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42"] Jan 28 10:09:31 crc kubenswrapper[4735]: E0128 10:09:31.632737 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d008e88-e9b6-4c2e-befa-76e9ba1fbadd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.632765 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d008e88-e9b6-4c2e-befa-76e9ba1fbadd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 28 10:09:31 crc kubenswrapper[4735]: E0128 10:09:31.632783 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c76400-e257-4b83-a11e-c5aade4987cb" containerName="extract-content" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.632788 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c76400-e257-4b83-a11e-c5aade4987cb" containerName="extract-content" Jan 28 10:09:31 crc kubenswrapper[4735]: E0128 10:09:31.632807 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c76400-e257-4b83-a11e-c5aade4987cb" containerName="extract-utilities" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.632813 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c76400-e257-4b83-a11e-c5aade4987cb" containerName="extract-utilities" Jan 28 10:09:31 crc kubenswrapper[4735]: E0128 10:09:31.632826 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54c76400-e257-4b83-a11e-c5aade4987cb" containerName="registry-server" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.632833 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="54c76400-e257-4b83-a11e-c5aade4987cb" containerName="registry-server" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.632998 
4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d008e88-e9b6-4c2e-befa-76e9ba1fbadd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.633015 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="54c76400-e257-4b83-a11e-c5aade4987cb" containerName="registry-server" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.633581 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.635659 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.635909 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.636026 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.636156 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.643953 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42"] Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.709515 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6kf42\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 
crc kubenswrapper[4735]: I0128 10:09:31.709578 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6kf42\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.709737 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrbxn\" (UniqueName: \"kubernetes.io/projected/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-kube-api-access-vrbxn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6kf42\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.811347 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6kf42\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.811397 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6kf42\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.811437 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrbxn\" (UniqueName: 
\"kubernetes.io/projected/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-kube-api-access-vrbxn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6kf42\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.816211 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6kf42\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.816339 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6kf42\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.828270 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrbxn\" (UniqueName: \"kubernetes.io/projected/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-kube-api-access-vrbxn\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6kf42\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:31 crc kubenswrapper[4735]: I0128 10:09:31.951949 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:09:32 crc kubenswrapper[4735]: I0128 10:09:32.457954 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42"] Jan 28 10:09:32 crc kubenswrapper[4735]: I0128 10:09:32.488330 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" event={"ID":"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1","Type":"ContainerStarted","Data":"a1a1a48b0a5de275106ddea03309beaaf270b899ff858cb4169be1bae983a268"} Jan 28 10:09:33 crc kubenswrapper[4735]: I0128 10:09:33.529577 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" event={"ID":"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1","Type":"ContainerStarted","Data":"0735126d3ace977103394dd6824bbc71599ffcbc0dd573637457951c067a71b8"} Jan 28 10:09:33 crc kubenswrapper[4735]: I0128 10:09:33.536930 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" podStartSLOduration=2.118616592 podStartE2EDuration="2.536912102s" podCreationTimestamp="2026-01-28 10:09:31 +0000 UTC" firstStartedPulling="2026-01-28 10:09:32.467931337 +0000 UTC m=+1625.617595710" lastFinishedPulling="2026-01-28 10:09:32.886226847 +0000 UTC m=+1626.035891220" observedRunningTime="2026-01-28 10:09:33.527288038 +0000 UTC m=+1626.676952411" watchObservedRunningTime="2026-01-28 10:09:33.536912102 +0000 UTC m=+1626.686576465" Jan 28 10:09:41 crc kubenswrapper[4735]: I0128 10:09:41.502294 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:09:41 crc kubenswrapper[4735]: E0128 10:09:41.504062 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:09:49 crc kubenswrapper[4735]: I0128 10:09:49.073613 4735 scope.go:117] "RemoveContainer" containerID="8848dd821273fe12461deb8bbcdead9b3f24ce8e0cbf3a27ab822eb026d60b17" Jan 28 10:09:49 crc kubenswrapper[4735]: I0128 10:09:49.098398 4735 scope.go:117] "RemoveContainer" containerID="52deaa193137e26997792b53653a7037ec145f886eec81a6d4e38d3cc5e704bd" Jan 28 10:09:49 crc kubenswrapper[4735]: I0128 10:09:49.124227 4735 scope.go:117] "RemoveContainer" containerID="781dc387102b10116d7707e33368895dbfd6bb72489bae195dc3ed1d6d725346" Jan 28 10:09:49 crc kubenswrapper[4735]: I0128 10:09:49.151310 4735 scope.go:117] "RemoveContainer" containerID="dec6ffdd43f7c0945a00ff0be119826b2bed3b84e45d7c63c867bc5bc194e70b" Jan 28 10:09:49 crc kubenswrapper[4735]: I0128 10:09:49.178114 4735 scope.go:117] "RemoveContainer" containerID="b500d99cc28a4c58293b28d6abcfdf7fb0030ec6f9af23baf06629b1af514002" Jan 28 10:09:49 crc kubenswrapper[4735]: I0128 10:09:49.198451 4735 scope.go:117] "RemoveContainer" containerID="8fbcf9ffd056f023f261821810435ceaa4a202ef8db46cdb13cdedf8dbc5a600" Jan 28 10:09:49 crc kubenswrapper[4735]: I0128 10:09:49.218683 4735 scope.go:117] "RemoveContainer" containerID="f9f040f050b85c3d4583578eeca404069f0c2d0a167cbd396f3915e04f167fc0" Jan 28 10:09:56 crc kubenswrapper[4735]: I0128 10:09:56.502798 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:09:56 crc kubenswrapper[4735]: E0128 10:09:56.503593 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:10:08 crc kubenswrapper[4735]: I0128 10:10:08.056735 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5c01-account-create-update-6dxpf"] Jan 28 10:10:08 crc kubenswrapper[4735]: I0128 10:10:08.076216 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jn5hv"] Jan 28 10:10:08 crc kubenswrapper[4735]: I0128 10:10:08.097244 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5c01-account-create-update-6dxpf"] Jan 28 10:10:08 crc kubenswrapper[4735]: I0128 10:10:08.107578 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nxql9"] Jan 28 10:10:08 crc kubenswrapper[4735]: I0128 10:10:08.115954 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jn5hv"] Jan 28 10:10:08 crc kubenswrapper[4735]: I0128 10:10:08.131422 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nxql9"] Jan 28 10:10:08 crc kubenswrapper[4735]: I0128 10:10:08.503231 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:10:08 crc kubenswrapper[4735]: E0128 10:10:08.503616 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.036552 4735 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-da82-account-create-update-wcmq6"] Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.046061 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-438a-account-create-update-c6phc"] Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.054201 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2vzq6"] Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.062425 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-438a-account-create-update-c6phc"] Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.070630 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-da82-account-create-update-wcmq6"] Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.078380 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2vzq6"] Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.514014 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1572001c-3eab-45ec-a976-6d5bac5fd3ad" path="/var/lib/kubelet/pods/1572001c-3eab-45ec-a976-6d5bac5fd3ad/volumes" Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.514830 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345" path="/var/lib/kubelet/pods/3d7385fe-f4e9-4fab-b1b2-0ebf60fa9345/volumes" Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.515334 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef60663-8c16-49a5-84ad-2d59c0972f72" path="/var/lib/kubelet/pods/5ef60663-8c16-49a5-84ad-2d59c0972f72/volumes" Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.515837 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794d2867-cc06-44ad-82a0-778faaaa98f3" path="/var/lib/kubelet/pods/794d2867-cc06-44ad-82a0-778faaaa98f3/volumes" Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 
10:10:09.516776 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84ab8ffc-62a4-4091-abd1-25082ecf0de9" path="/var/lib/kubelet/pods/84ab8ffc-62a4-4091-abd1-25082ecf0de9/volumes" Jan 28 10:10:09 crc kubenswrapper[4735]: I0128 10:10:09.517257 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a76ea5-21ec-4ee3-b611-2cbf66deb576" path="/var/lib/kubelet/pods/e0a76ea5-21ec-4ee3-b611-2cbf66deb576/volumes" Jan 28 10:10:22 crc kubenswrapper[4735]: I0128 10:10:22.504260 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:10:22 crc kubenswrapper[4735]: E0128 10:10:22.505334 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:10:33 crc kubenswrapper[4735]: I0128 10:10:33.066920 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-82wv7"] Jan 28 10:10:33 crc kubenswrapper[4735]: I0128 10:10:33.075902 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-82wv7"] Jan 28 10:10:33 crc kubenswrapper[4735]: I0128 10:10:33.515827 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db0a104c-3dab-4e49-b854-3aea57352423" path="/var/lib/kubelet/pods/db0a104c-3dab-4e49-b854-3aea57352423/volumes" Jan 28 10:10:36 crc kubenswrapper[4735]: I0128 10:10:36.502545 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:10:36 crc kubenswrapper[4735]: E0128 10:10:36.503143 4735 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:10:38 crc kubenswrapper[4735]: I0128 10:10:38.029139 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dfks4"] Jan 28 10:10:38 crc kubenswrapper[4735]: I0128 10:10:38.037111 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dfks4"] Jan 28 10:10:39 crc kubenswrapper[4735]: I0128 10:10:39.517354 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fcc5331-6d94-4b66-9e5b-2bb8bced49dd" path="/var/lib/kubelet/pods/5fcc5331-6d94-4b66-9e5b-2bb8bced49dd/volumes" Jan 28 10:10:45 crc kubenswrapper[4735]: I0128 10:10:45.037705 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-l9h9d"] Jan 28 10:10:45 crc kubenswrapper[4735]: I0128 10:10:45.048069 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bpqdm"] Jan 28 10:10:45 crc kubenswrapper[4735]: I0128 10:10:45.058955 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-l9h9d"] Jan 28 10:10:45 crc kubenswrapper[4735]: I0128 10:10:45.067827 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bpqdm"] Jan 28 10:10:45 crc kubenswrapper[4735]: I0128 10:10:45.075707 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-41da-account-create-update-6w5pq"] Jan 28 10:10:45 crc kubenswrapper[4735]: I0128 10:10:45.086548 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-41da-account-create-update-6w5pq"] Jan 28 10:10:45 crc kubenswrapper[4735]: I0128 10:10:45.515258 4735 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3ba296-5623-443d-9349-61c27620d525" path="/var/lib/kubelet/pods/2b3ba296-5623-443d-9349-61c27620d525/volumes" Jan 28 10:10:45 crc kubenswrapper[4735]: I0128 10:10:45.516190 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e71de2a-5efc-430d-bdff-7480ec0f7465" path="/var/lib/kubelet/pods/3e71de2a-5efc-430d-bdff-7480ec0f7465/volumes" Jan 28 10:10:45 crc kubenswrapper[4735]: I0128 10:10:45.516999 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb508b8-a325-46e1-9ddb-1ce941cd8d48" path="/var/lib/kubelet/pods/9cb508b8-a325-46e1-9ddb-1ce941cd8d48/volumes" Jan 28 10:10:48 crc kubenswrapper[4735]: I0128 10:10:48.044176 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-djphf"] Jan 28 10:10:48 crc kubenswrapper[4735]: I0128 10:10:48.056304 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2d62-account-create-update-kh76t"] Jan 28 10:10:48 crc kubenswrapper[4735]: I0128 10:10:48.066536 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a22f-account-create-update-5jl8s"] Jan 28 10:10:48 crc kubenswrapper[4735]: I0128 10:10:48.073315 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-djphf"] Jan 28 10:10:48 crc kubenswrapper[4735]: I0128 10:10:48.080179 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2d62-account-create-update-kh76t"] Jan 28 10:10:48 crc kubenswrapper[4735]: I0128 10:10:48.087863 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a22f-account-create-update-5jl8s"] Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.320919 4735 scope.go:117] "RemoveContainer" containerID="9597e9d97e886aad3e5d76c23f7970dc5626550e01d83d5b77bdffd0357d3aad" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.354054 4735 scope.go:117] "RemoveContainer" 
containerID="3e1de117adec77fcdc3e6f93df39d57d236fa661ddb74f58855da429ef323836" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.393583 4735 scope.go:117] "RemoveContainer" containerID="61e47de233e0f4cd2f29c46baeac05d93a4e2c926bb585a8fa7e15b3427378b1" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.444537 4735 scope.go:117] "RemoveContainer" containerID="497ab24f85b5899a2648bbde5d46258506f7c920dd34e35cee162159bab75718" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.478065 4735 scope.go:117] "RemoveContainer" containerID="9df4d0227611fef25d1dcc9f51a25ac51563c6c5cb6a6dc814adf6d60795a305" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.519348 4735 scope.go:117] "RemoveContainer" containerID="a8dac7c19ff73dced448db8cf9a81e51cea1bafd2bff8c7dbb8287e3a1f5b880" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.526341 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eeb0ff9-4319-4e0b-82d8-07d523060b9f" path="/var/lib/kubelet/pods/9eeb0ff9-4319-4e0b-82d8-07d523060b9f/volumes" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.527860 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a278d191-6be1-4cc8-98a1-c78ecf19c1b8" path="/var/lib/kubelet/pods/a278d191-6be1-4cc8-98a1-c78ecf19c1b8/volumes" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.529125 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d65a3154-7da4-4c99-b8d1-e3015e6fd1fe" path="/var/lib/kubelet/pods/d65a3154-7da4-4c99-b8d1-e3015e6fd1fe/volumes" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.559587 4735 scope.go:117] "RemoveContainer" containerID="8a4a785933196133458b96110a7e874bafa8139105a869a3d9d6061a3240c8bb" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.584243 4735 scope.go:117] "RemoveContainer" containerID="6f6a0e5d2e5c14a777885091632b2f1364b8221a51adf31f944b79deaf31fd4a" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.622194 4735 scope.go:117] 
"RemoveContainer" containerID="21cc0ac2bc68b13798fc3f6ca713b1a7b1a409eb82d53c526de71b0aac5f828b" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.652348 4735 scope.go:117] "RemoveContainer" containerID="b9c46a777f818814f0497bbac3d769a44d8ac5d4695015f4852069a0a7773d23" Jan 28 10:10:49 crc kubenswrapper[4735]: I0128 10:10:49.670719 4735 scope.go:117] "RemoveContainer" containerID="f364ec64471274d14a24cf866ae96a566a6f602c1ac7640271c80a37ed80819b" Jan 28 10:10:50 crc kubenswrapper[4735]: I0128 10:10:50.503014 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:10:50 crc kubenswrapper[4735]: E0128 10:10:50.503798 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:10:53 crc kubenswrapper[4735]: I0128 10:10:53.039303 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-c2m26"] Jan 28 10:10:53 crc kubenswrapper[4735]: I0128 10:10:53.048854 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-c2m26"] Jan 28 10:10:53 crc kubenswrapper[4735]: I0128 10:10:53.519433 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da69c572-110e-4caf-aa51-a86700885cb5" path="/var/lib/kubelet/pods/da69c572-110e-4caf-aa51-a86700885cb5/volumes" Jan 28 10:11:01 crc kubenswrapper[4735]: I0128 10:11:01.502805 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:11:01 crc kubenswrapper[4735]: E0128 10:11:01.503465 4735 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:11:12 crc kubenswrapper[4735]: I0128 10:11:12.503577 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:11:12 crc kubenswrapper[4735]: E0128 10:11:12.504388 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:11:13 crc kubenswrapper[4735]: I0128 10:11:13.490606 4735 generic.go:334] "Generic (PLEG): container finished" podID="cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1" containerID="0735126d3ace977103394dd6824bbc71599ffcbc0dd573637457951c067a71b8" exitCode=0 Jan 28 10:11:13 crc kubenswrapper[4735]: I0128 10:11:13.490934 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" event={"ID":"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1","Type":"ContainerDied","Data":"0735126d3ace977103394dd6824bbc71599ffcbc0dd573637457951c067a71b8"} Jan 28 10:11:14 crc kubenswrapper[4735]: I0128 10:11:14.955235 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.069589 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-ssh-key-openstack-edpm-ipam\") pod \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.070248 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrbxn\" (UniqueName: \"kubernetes.io/projected/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-kube-api-access-vrbxn\") pod \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.070314 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-inventory\") pod \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\" (UID: \"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1\") " Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.074732 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-kube-api-access-vrbxn" (OuterVolumeSpecName: "kube-api-access-vrbxn") pod "cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1" (UID: "cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1"). InnerVolumeSpecName "kube-api-access-vrbxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.097213 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1" (UID: "cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.103390 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-inventory" (OuterVolumeSpecName: "inventory") pod "cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1" (UID: "cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.172416 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrbxn\" (UniqueName: \"kubernetes.io/projected/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-kube-api-access-vrbxn\") on node \"crc\" DevicePath \"\"" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.172453 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.172463 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.511324 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.513989 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6kf42" event={"ID":"cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1","Type":"ContainerDied","Data":"a1a1a48b0a5de275106ddea03309beaaf270b899ff858cb4169be1bae983a268"} Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.514072 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a1a48b0a5de275106ddea03309beaaf270b899ff858cb4169be1bae983a268" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.609997 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s"] Jan 28 10:11:15 crc kubenswrapper[4735]: E0128 10:11:15.610516 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.610534 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.610710 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.611339 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.613283 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.613462 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.613678 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.613858 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.622213 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s"] Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.683210 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vp29s\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.683558 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vp29s\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc 
kubenswrapper[4735]: I0128 10:11:15.684314 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-692vz\" (UniqueName: \"kubernetes.io/projected/d981f997-629e-4604-bd7a-4ae05efbeb3f-kube-api-access-692vz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vp29s\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.787075 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vp29s\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.787147 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vp29s\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.787254 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-692vz\" (UniqueName: \"kubernetes.io/projected/d981f997-629e-4604-bd7a-4ae05efbeb3f-kube-api-access-692vz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vp29s\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.791911 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vp29s\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.792147 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vp29s\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.819385 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-692vz\" (UniqueName: \"kubernetes.io/projected/d981f997-629e-4604-bd7a-4ae05efbeb3f-kube-api-access-692vz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vp29s\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:15 crc kubenswrapper[4735]: I0128 10:11:15.933770 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:11:16 crc kubenswrapper[4735]: I0128 10:11:16.586422 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s"] Jan 28 10:11:16 crc kubenswrapper[4735]: I0128 10:11:16.593032 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 10:11:17 crc kubenswrapper[4735]: I0128 10:11:17.540741 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" event={"ID":"d981f997-629e-4604-bd7a-4ae05efbeb3f","Type":"ContainerStarted","Data":"d641ad72a6996fc49cba3e85ab66c7bf1e2808b0b748482b06cad637567e3700"} Jan 28 10:11:17 crc kubenswrapper[4735]: I0128 10:11:17.541424 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" event={"ID":"d981f997-629e-4604-bd7a-4ae05efbeb3f","Type":"ContainerStarted","Data":"3db61d827b3554bde0575c2f58043ba73e8dcd593da693157488d4a2510ea309"} Jan 28 10:11:17 crc kubenswrapper[4735]: I0128 10:11:17.564591 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" podStartSLOduration=2.095838862 podStartE2EDuration="2.564549223s" podCreationTimestamp="2026-01-28 10:11:15 +0000 UTC" firstStartedPulling="2026-01-28 10:11:16.592694325 +0000 UTC m=+1729.742358698" lastFinishedPulling="2026-01-28 10:11:17.061404666 +0000 UTC m=+1730.211069059" observedRunningTime="2026-01-28 10:11:17.556491707 +0000 UTC m=+1730.706156090" watchObservedRunningTime="2026-01-28 10:11:17.564549223 +0000 UTC m=+1730.714213596" Jan 28 10:11:24 crc kubenswrapper[4735]: I0128 10:11:24.502436 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 
10:11:24 crc kubenswrapper[4735]: E0128 10:11:24.503392 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:11:25 crc kubenswrapper[4735]: I0128 10:11:25.054671 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5ghbh"] Jan 28 10:11:25 crc kubenswrapper[4735]: I0128 10:11:25.069599 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5ghbh"] Jan 28 10:11:25 crc kubenswrapper[4735]: I0128 10:11:25.517282 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e8dd26-845d-43ff-bc2d-fd6f4808736f" path="/var/lib/kubelet/pods/c0e8dd26-845d-43ff-bc2d-fd6f4808736f/volumes" Jan 28 10:11:31 crc kubenswrapper[4735]: I0128 10:11:31.050570 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-v5th8"] Jan 28 10:11:31 crc kubenswrapper[4735]: I0128 10:11:31.063314 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-v5th8"] Jan 28 10:11:31 crc kubenswrapper[4735]: I0128 10:11:31.516578 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1891b5eb-afe2-4e76-b3bd-2a141a784a13" path="/var/lib/kubelet/pods/1891b5eb-afe2-4e76-b3bd-2a141a784a13/volumes" Jan 28 10:11:35 crc kubenswrapper[4735]: I0128 10:11:35.503182 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:11:35 crc kubenswrapper[4735]: E0128 10:11:35.503852 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:11:38 crc kubenswrapper[4735]: I0128 10:11:38.035214 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2vhll"] Jan 28 10:11:38 crc kubenswrapper[4735]: I0128 10:11:38.048441 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2vhll"] Jan 28 10:11:39 crc kubenswrapper[4735]: I0128 10:11:39.519071 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7679ac38-8750-48e8-9f60-0c7214eef680" path="/var/lib/kubelet/pods/7679ac38-8750-48e8-9f60-0c7214eef680/volumes" Jan 28 10:11:48 crc kubenswrapper[4735]: I0128 10:11:48.043790 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-khsht"] Jan 28 10:11:48 crc kubenswrapper[4735]: I0128 10:11:48.067820 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-jn5j5"] Jan 28 10:11:48 crc kubenswrapper[4735]: I0128 10:11:48.079451 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-khsht"] Jan 28 10:11:48 crc kubenswrapper[4735]: I0128 10:11:48.088627 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-jn5j5"] Jan 28 10:11:49 crc kubenswrapper[4735]: I0128 10:11:49.502805 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:11:49 crc kubenswrapper[4735]: E0128 10:11:49.503293 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:11:49 crc kubenswrapper[4735]: I0128 10:11:49.519075 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c" path="/var/lib/kubelet/pods/1eab452a-3efa-4f9f-a03f-e0c6a3d8bd6c/volumes" Jan 28 10:11:49 crc kubenswrapper[4735]: I0128 10:11:49.520556 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abbfe5cf-00af-4a6b-921e-401f549a7c8e" path="/var/lib/kubelet/pods/abbfe5cf-00af-4a6b-921e-401f549a7c8e/volumes" Jan 28 10:11:49 crc kubenswrapper[4735]: I0128 10:11:49.927601 4735 scope.go:117] "RemoveContainer" containerID="095b46f485573009f0e56308bdf1435113400c91317b028dd131db7a9ab55cf9" Jan 28 10:11:49 crc kubenswrapper[4735]: I0128 10:11:49.956425 4735 scope.go:117] "RemoveContainer" containerID="e0fe4545364e8c14510c4a06bd6dc6181f116bec80d6a79a9b06d0fbdab77f8f" Jan 28 10:11:49 crc kubenswrapper[4735]: I0128 10:11:49.994287 4735 scope.go:117] "RemoveContainer" containerID="8ea9d4e3ac4ed6ad2be647d9884e043a1f86a34d98621f183546e60556791092" Jan 28 10:11:50 crc kubenswrapper[4735]: I0128 10:11:50.043880 4735 scope.go:117] "RemoveContainer" containerID="ea3ddeb2c23eb87f52d1fe471220fe6e52abda2a3f03d74c722b495041206d49" Jan 28 10:11:50 crc kubenswrapper[4735]: I0128 10:11:50.113019 4735 scope.go:117] "RemoveContainer" containerID="59818cedee9a7d9e36eb957d4d37c122583db2a0ea724fe50d39cfc2ed0850fe" Jan 28 10:11:50 crc kubenswrapper[4735]: I0128 10:11:50.130151 4735 scope.go:117] "RemoveContainer" containerID="a23b8db2f1361e0c48bf618e9f5e9d49018d6962c921fd0a24b64339635e1431" Jan 28 10:11:50 crc kubenswrapper[4735]: I0128 10:11:50.187078 4735 scope.go:117] "RemoveContainer" containerID="8b4062ed4118495b02a20bbaaf78b310a0147dcb1a89db8ea8b4bdc9bef56afb" Jan 28 10:11:50 crc 
kubenswrapper[4735]: I0128 10:11:50.231759 4735 scope.go:117] "RemoveContainer" containerID="125154641b1da1358a3b6cbd37cda051a4312479fa1f779cff3c89ea80066db8" Jan 28 10:11:50 crc kubenswrapper[4735]: I0128 10:11:50.263718 4735 scope.go:117] "RemoveContainer" containerID="06d5cf8ada5c991ca25c70487193cb8739e17d60959bd4cddb2596ece2c16006" Jan 28 10:12:01 crc kubenswrapper[4735]: I0128 10:12:01.502975 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:12:01 crc kubenswrapper[4735]: E0128 10:12:01.504020 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:12:13 crc kubenswrapper[4735]: I0128 10:12:13.504089 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:12:13 crc kubenswrapper[4735]: E0128 10:12:13.505127 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:12:27 crc kubenswrapper[4735]: I0128 10:12:27.509432 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:12:27 crc kubenswrapper[4735]: E0128 10:12:27.510218 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:12:37 crc kubenswrapper[4735]: I0128 10:12:37.046580 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dnxzq"] Jan 28 10:12:37 crc kubenswrapper[4735]: I0128 10:12:37.055166 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dnxzq"] Jan 28 10:12:37 crc kubenswrapper[4735]: I0128 10:12:37.514278 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedf2e75-1cf8-45be-984b-0fec5c06081d" path="/var/lib/kubelet/pods/dedf2e75-1cf8-45be-984b-0fec5c06081d/volumes" Jan 28 10:12:38 crc kubenswrapper[4735]: I0128 10:12:38.503891 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:12:38 crc kubenswrapper[4735]: E0128 10:12:38.504132 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.046165 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6cb2-account-create-update-x2mkl"] Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.060589 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6cb2-account-create-update-x2mkl"] Jan 28 10:12:39 crc kubenswrapper[4735]: 
I0128 10:12:39.071279 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-dacf-account-create-update-2vtrf"] Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.079289 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9b49-account-create-update-xdbbq"] Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.087493 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9b49-account-create-update-xdbbq"] Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.096021 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-dacf-account-create-update-2vtrf"] Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.106386 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lshmj"] Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.116496 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lshmj"] Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.127379 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wrv9l"] Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.135393 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wrv9l"] Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.515147 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c0163a-15b4-41e5-ba99-c1d296c31c34" path="/var/lib/kubelet/pods/15c0163a-15b4-41e5-ba99-c1d296c31c34/volumes" Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.516170 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc3d821-db59-43d8-8de6-290509114a4a" path="/var/lib/kubelet/pods/3bc3d821-db59-43d8-8de6-290509114a4a/volumes" Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.516801 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4563dc5a-d5e3-41f7-a719-7eb1457bb219" 
path="/var/lib/kubelet/pods/4563dc5a-d5e3-41f7-a719-7eb1457bb219/volumes" Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.517331 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d63f54a-6505-499c-88db-dd3b04968076" path="/var/lib/kubelet/pods/8d63f54a-6505-499c-88db-dd3b04968076/volumes" Jan 28 10:12:39 crc kubenswrapper[4735]: I0128 10:12:39.518420 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9f098d7-b30e-4612-aeaf-522f2706b053" path="/var/lib/kubelet/pods/b9f098d7-b30e-4612-aeaf-522f2706b053/volumes" Jan 28 10:12:40 crc kubenswrapper[4735]: I0128 10:12:40.339481 4735 generic.go:334] "Generic (PLEG): container finished" podID="d981f997-629e-4604-bd7a-4ae05efbeb3f" containerID="d641ad72a6996fc49cba3e85ab66c7bf1e2808b0b748482b06cad637567e3700" exitCode=0 Jan 28 10:12:40 crc kubenswrapper[4735]: I0128 10:12:40.339529 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" event={"ID":"d981f997-629e-4604-bd7a-4ae05efbeb3f","Type":"ContainerDied","Data":"d641ad72a6996fc49cba3e85ab66c7bf1e2808b0b748482b06cad637567e3700"} Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.762978 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.839072 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-inventory\") pod \"d981f997-629e-4604-bd7a-4ae05efbeb3f\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.839485 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-ssh-key-openstack-edpm-ipam\") pod \"d981f997-629e-4604-bd7a-4ae05efbeb3f\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.839594 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-692vz\" (UniqueName: \"kubernetes.io/projected/d981f997-629e-4604-bd7a-4ae05efbeb3f-kube-api-access-692vz\") pod \"d981f997-629e-4604-bd7a-4ae05efbeb3f\" (UID: \"d981f997-629e-4604-bd7a-4ae05efbeb3f\") " Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.846030 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d981f997-629e-4604-bd7a-4ae05efbeb3f-kube-api-access-692vz" (OuterVolumeSpecName: "kube-api-access-692vz") pod "d981f997-629e-4604-bd7a-4ae05efbeb3f" (UID: "d981f997-629e-4604-bd7a-4ae05efbeb3f"). InnerVolumeSpecName "kube-api-access-692vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.873547 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-inventory" (OuterVolumeSpecName: "inventory") pod "d981f997-629e-4604-bd7a-4ae05efbeb3f" (UID: "d981f997-629e-4604-bd7a-4ae05efbeb3f"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.874375 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d981f997-629e-4604-bd7a-4ae05efbeb3f" (UID: "d981f997-629e-4604-bd7a-4ae05efbeb3f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.941417 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.941476 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d981f997-629e-4604-bd7a-4ae05efbeb3f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:12:41 crc kubenswrapper[4735]: I0128 10:12:41.941488 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-692vz\" (UniqueName: \"kubernetes.io/projected/d981f997-629e-4604-bd7a-4ae05efbeb3f-kube-api-access-692vz\") on node \"crc\" DevicePath \"\"" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.358969 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" event={"ID":"d981f997-629e-4604-bd7a-4ae05efbeb3f","Type":"ContainerDied","Data":"3db61d827b3554bde0575c2f58043ba73e8dcd593da693157488d4a2510ea309"} Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.359013 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3db61d827b3554bde0575c2f58043ba73e8dcd593da693157488d4a2510ea309" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 
10:12:42.359100 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vp29s" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.442620 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8"] Jan 28 10:12:42 crc kubenswrapper[4735]: E0128 10:12:42.443085 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d981f997-629e-4604-bd7a-4ae05efbeb3f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.443109 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d981f997-629e-4604-bd7a-4ae05efbeb3f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.443403 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d981f997-629e-4604-bd7a-4ae05efbeb3f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.444212 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.447456 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.447663 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.448978 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.450634 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.453231 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8"] Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.551369 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.551615 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrwl\" (UniqueName: \"kubernetes.io/projected/eca0937f-f5d2-4682-9ac0-92041c4fac82-kube-api-access-ghrwl\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 
10:12:42.551702 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.691643 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.691856 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.691959 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrwl\" (UniqueName: \"kubernetes.io/projected/eca0937f-f5d2-4682-9ac0-92041c4fac82-kube-api-access-ghrwl\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.697137 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.703071 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.714513 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrwl\" (UniqueName: \"kubernetes.io/projected/eca0937f-f5d2-4682-9ac0-92041c4fac82-kube-api-access-ghrwl\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:42 crc kubenswrapper[4735]: I0128 10:12:42.770544 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:43 crc kubenswrapper[4735]: I0128 10:12:43.293186 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8"] Jan 28 10:12:43 crc kubenswrapper[4735]: I0128 10:12:43.369796 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" event={"ID":"eca0937f-f5d2-4682-9ac0-92041c4fac82","Type":"ContainerStarted","Data":"ffb9e12f38d47c66e5f9b4a1768e1a59f448ffa31f874faab544153fa5d930c1"} Jan 28 10:12:44 crc kubenswrapper[4735]: I0128 10:12:44.378610 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" event={"ID":"eca0937f-f5d2-4682-9ac0-92041c4fac82","Type":"ContainerStarted","Data":"332844e7182dc2acf54ac25b4514c15a3fb5c389aa815dbf4db6a8cb2248cc95"} Jan 28 10:12:44 crc kubenswrapper[4735]: I0128 10:12:44.394845 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" podStartSLOduration=1.7317160010000001 podStartE2EDuration="2.394827028s" podCreationTimestamp="2026-01-28 10:12:42 +0000 UTC" firstStartedPulling="2026-01-28 10:12:43.294980477 +0000 UTC m=+1816.444644850" lastFinishedPulling="2026-01-28 10:12:43.958091504 +0000 UTC m=+1817.107755877" observedRunningTime="2026-01-28 10:12:44.393607388 +0000 UTC m=+1817.543271761" watchObservedRunningTime="2026-01-28 10:12:44.394827028 +0000 UTC m=+1817.544491401" Jan 28 10:12:49 crc kubenswrapper[4735]: I0128 10:12:49.425340 4735 generic.go:334] "Generic (PLEG): container finished" podID="eca0937f-f5d2-4682-9ac0-92041c4fac82" containerID="332844e7182dc2acf54ac25b4514c15a3fb5c389aa815dbf4db6a8cb2248cc95" exitCode=0 Jan 28 10:12:49 crc kubenswrapper[4735]: I0128 10:12:49.425428 4735 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" event={"ID":"eca0937f-f5d2-4682-9ac0-92041c4fac82","Type":"ContainerDied","Data":"332844e7182dc2acf54ac25b4514c15a3fb5c389aa815dbf4db6a8cb2248cc95"} Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.456064 4735 scope.go:117] "RemoveContainer" containerID="92f8b337df447f9facb2d511857802961202961143273836128a79c0c589cf13" Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.490119 4735 scope.go:117] "RemoveContainer" containerID="04017251c9110f6a42b949fa822d03c3973285e1143a9dd2813dbdd6ed43a473" Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.546831 4735 scope.go:117] "RemoveContainer" containerID="a7149b3da04b56778ba24d6f24abc08f69d788e2b3c8ef4ba59e7b72c6d781a7" Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.619166 4735 scope.go:117] "RemoveContainer" containerID="673b5b614324c3abc57c20680c2865d50585019cc9431f3bc6a0a82e4beac5b7" Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.657272 4735 scope.go:117] "RemoveContainer" containerID="02412a12c8f09d8576e1d131c29ea47c640222a66a52a1dc58ec790f5712c481" Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.704426 4735 scope.go:117] "RemoveContainer" containerID="9746b8080e2af23eb02c552aab4c5f862b5f01055d86a9b94c03ddca827502f5" Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.807663 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.968687 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghrwl\" (UniqueName: \"kubernetes.io/projected/eca0937f-f5d2-4682-9ac0-92041c4fac82-kube-api-access-ghrwl\") pod \"eca0937f-f5d2-4682-9ac0-92041c4fac82\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.969090 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-inventory\") pod \"eca0937f-f5d2-4682-9ac0-92041c4fac82\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.969131 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-ssh-key-openstack-edpm-ipam\") pod \"eca0937f-f5d2-4682-9ac0-92041c4fac82\" (UID: \"eca0937f-f5d2-4682-9ac0-92041c4fac82\") " Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.974737 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca0937f-f5d2-4682-9ac0-92041c4fac82-kube-api-access-ghrwl" (OuterVolumeSpecName: "kube-api-access-ghrwl") pod "eca0937f-f5d2-4682-9ac0-92041c4fac82" (UID: "eca0937f-f5d2-4682-9ac0-92041c4fac82"). InnerVolumeSpecName "kube-api-access-ghrwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.997225 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-inventory" (OuterVolumeSpecName: "inventory") pod "eca0937f-f5d2-4682-9ac0-92041c4fac82" (UID: "eca0937f-f5d2-4682-9ac0-92041c4fac82"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:12:50 crc kubenswrapper[4735]: I0128 10:12:50.997564 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eca0937f-f5d2-4682-9ac0-92041c4fac82" (UID: "eca0937f-f5d2-4682-9ac0-92041c4fac82"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.072125 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghrwl\" (UniqueName: \"kubernetes.io/projected/eca0937f-f5d2-4682-9ac0-92041c4fac82-kube-api-access-ghrwl\") on node \"crc\" DevicePath \"\"" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.072199 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.072212 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eca0937f-f5d2-4682-9ac0-92041c4fac82-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.444685 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" event={"ID":"eca0937f-f5d2-4682-9ac0-92041c4fac82","Type":"ContainerDied","Data":"ffb9e12f38d47c66e5f9b4a1768e1a59f448ffa31f874faab544153fa5d930c1"} Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.444731 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffb9e12f38d47c66e5f9b4a1768e1a59f448ffa31f874faab544153fa5d930c1" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 
10:12:51.444769 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.503551 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:12:51 crc kubenswrapper[4735]: E0128 10:12:51.505032 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.530262 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml"] Jan 28 10:12:51 crc kubenswrapper[4735]: E0128 10:12:51.530792 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca0937f-f5d2-4682-9ac0-92041c4fac82" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.530813 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca0937f-f5d2-4682-9ac0-92041c4fac82" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.531027 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca0937f-f5d2-4682-9ac0-92041c4fac82" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.531731 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.534073 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.534335 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.534406 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.539198 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.540117 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml"] Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.681626 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9rml\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.681800 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9rml\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.681930 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq5f4\" (UniqueName: \"kubernetes.io/projected/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-kube-api-access-qq5f4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9rml\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.783367 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9rml\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.783728 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9rml\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.783844 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq5f4\" (UniqueName: \"kubernetes.io/projected/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-kube-api-access-qq5f4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9rml\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.789488 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-c9rml\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.789769 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9rml\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.801067 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq5f4\" (UniqueName: \"kubernetes.io/projected/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-kube-api-access-qq5f4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c9rml\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:51 crc kubenswrapper[4735]: I0128 10:12:51.856066 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:12:52 crc kubenswrapper[4735]: I0128 10:12:52.410448 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml"] Jan 28 10:12:52 crc kubenswrapper[4735]: I0128 10:12:52.453384 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" event={"ID":"3d0c7cb0-8a7d-4956-8cc0-c61510e64590","Type":"ContainerStarted","Data":"022c76fac9b4d6e202d922c18bf6f5d66351ecaaabcd07e21c874d2b2e126635"} Jan 28 10:12:53 crc kubenswrapper[4735]: I0128 10:12:53.465689 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" event={"ID":"3d0c7cb0-8a7d-4956-8cc0-c61510e64590","Type":"ContainerStarted","Data":"b325ddff42cd885a7788775b6eb93f605e4c8a813780a682cab4ca97b5e5e7c4"} Jan 28 10:13:04 crc kubenswrapper[4735]: I0128 10:13:04.503656 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:13:04 crc kubenswrapper[4735]: E0128 10:13:04.504480 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:13:07 crc kubenswrapper[4735]: I0128 10:13:07.040104 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" podStartSLOduration=15.608769827 podStartE2EDuration="16.040080279s" podCreationTimestamp="2026-01-28 10:12:51 +0000 UTC" firstStartedPulling="2026-01-28 
10:12:52.416791569 +0000 UTC m=+1825.566455942" lastFinishedPulling="2026-01-28 10:12:52.848102031 +0000 UTC m=+1825.997766394" observedRunningTime="2026-01-28 10:12:53.485521962 +0000 UTC m=+1826.635186355" watchObservedRunningTime="2026-01-28 10:13:07.040080279 +0000 UTC m=+1840.189744652" Jan 28 10:13:07 crc kubenswrapper[4735]: I0128 10:13:07.046170 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p8gnh"] Jan 28 10:13:07 crc kubenswrapper[4735]: I0128 10:13:07.055728 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-p8gnh"] Jan 28 10:13:07 crc kubenswrapper[4735]: I0128 10:13:07.512375 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71e767c2-123a-497e-83b8-9fe0b8f45c77" path="/var/lib/kubelet/pods/71e767c2-123a-497e-83b8-9fe0b8f45c77/volumes" Jan 28 10:13:18 crc kubenswrapper[4735]: I0128 10:13:18.502471 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:13:18 crc kubenswrapper[4735]: I0128 10:13:18.710083 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"0ca59299c75c0261d224f376d225e4cfae82f9e72435765962d543c393946e9a"} Jan 28 10:13:28 crc kubenswrapper[4735]: I0128 10:13:28.800515 4735 generic.go:334] "Generic (PLEG): container finished" podID="3d0c7cb0-8a7d-4956-8cc0-c61510e64590" containerID="b325ddff42cd885a7788775b6eb93f605e4c8a813780a682cab4ca97b5e5e7c4" exitCode=0 Jan 28 10:13:28 crc kubenswrapper[4735]: I0128 10:13:28.800591 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" event={"ID":"3d0c7cb0-8a7d-4956-8cc0-c61510e64590","Type":"ContainerDied","Data":"b325ddff42cd885a7788775b6eb93f605e4c8a813780a682cab4ca97b5e5e7c4"} Jan 28 
10:13:29 crc kubenswrapper[4735]: I0128 10:13:29.026621 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-fjbrl"] Jan 28 10:13:29 crc kubenswrapper[4735]: I0128 10:13:29.033677 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-fjbrl"] Jan 28 10:13:29 crc kubenswrapper[4735]: I0128 10:13:29.513994 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553981e5-1a34-4046-b06f-cb8438042128" path="/var/lib/kubelet/pods/553981e5-1a34-4046-b06f-cb8438042128/volumes" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.056865 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vws4l"] Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.066669 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vws4l"] Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.235832 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.355894 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-ssh-key-openstack-edpm-ipam\") pod \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.356049 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-inventory\") pod \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.356170 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq5f4\" (UniqueName: \"kubernetes.io/projected/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-kube-api-access-qq5f4\") pod \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\" (UID: \"3d0c7cb0-8a7d-4956-8cc0-c61510e64590\") " Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.364991 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-kube-api-access-qq5f4" (OuterVolumeSpecName: "kube-api-access-qq5f4") pod "3d0c7cb0-8a7d-4956-8cc0-c61510e64590" (UID: "3d0c7cb0-8a7d-4956-8cc0-c61510e64590"). InnerVolumeSpecName "kube-api-access-qq5f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.387249 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3d0c7cb0-8a7d-4956-8cc0-c61510e64590" (UID: "3d0c7cb0-8a7d-4956-8cc0-c61510e64590"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.407964 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-inventory" (OuterVolumeSpecName: "inventory") pod "3d0c7cb0-8a7d-4956-8cc0-c61510e64590" (UID: "3d0c7cb0-8a7d-4956-8cc0-c61510e64590"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.458402 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq5f4\" (UniqueName: \"kubernetes.io/projected/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-kube-api-access-qq5f4\") on node \"crc\" DevicePath \"\"" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.458437 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.458451 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0c7cb0-8a7d-4956-8cc0-c61510e64590-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.820306 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" 
event={"ID":"3d0c7cb0-8a7d-4956-8cc0-c61510e64590","Type":"ContainerDied","Data":"022c76fac9b4d6e202d922c18bf6f5d66351ecaaabcd07e21c874d2b2e126635"} Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.820557 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="022c76fac9b4d6e202d922c18bf6f5d66351ecaaabcd07e21c874d2b2e126635" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.820363 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c9rml" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.906160 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22"] Jan 28 10:13:30 crc kubenswrapper[4735]: E0128 10:13:30.906521 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0c7cb0-8a7d-4956-8cc0-c61510e64590" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.906540 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0c7cb0-8a7d-4956-8cc0-c61510e64590" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.906690 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0c7cb0-8a7d-4956-8cc0-c61510e64590" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.907287 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.909650 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.909869 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.916371 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.919137 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:13:30 crc kubenswrapper[4735]: I0128 10:13:30.943878 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22"] Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.068811 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9lkg\" (UniqueName: \"kubernetes.io/projected/f77b490a-f09b-4e92-a529-69d81c87478e-kube-api-access-m9lkg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8b22\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.068902 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8b22\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc 
kubenswrapper[4735]: I0128 10:13:31.068970 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8b22\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.194672 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8b22\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.195109 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8b22\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.195284 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9lkg\" (UniqueName: \"kubernetes.io/projected/f77b490a-f09b-4e92-a529-69d81c87478e-kube-api-access-m9lkg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8b22\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.198836 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8b22\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.199590 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8b22\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.213321 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9lkg\" (UniqueName: \"kubernetes.io/projected/f77b490a-f09b-4e92-a529-69d81c87478e-kube-api-access-m9lkg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8b22\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.238440 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.512706 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7066dce-cc3b-4aff-9747-d10ef6eeb37d" path="/var/lib/kubelet/pods/d7066dce-cc3b-4aff-9747-d10ef6eeb37d/volumes" Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.763022 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22"] Jan 28 10:13:31 crc kubenswrapper[4735]: I0128 10:13:31.829956 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" event={"ID":"f77b490a-f09b-4e92-a529-69d81c87478e","Type":"ContainerStarted","Data":"3d094ac5d044bfeee4b269ee84059f5b9df16fd1401a405a080bd1a5fc0824d4"} Jan 28 10:13:32 crc kubenswrapper[4735]: I0128 10:13:32.839694 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" event={"ID":"f77b490a-f09b-4e92-a529-69d81c87478e","Type":"ContainerStarted","Data":"5e10a8e15ed29271f0c598fac0ba237eb8e9ee3fc3c98acce92d9ec1e0eda2f5"} Jan 28 10:13:32 crc kubenswrapper[4735]: I0128 10:13:32.856609 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" podStartSLOduration=2.340926614 podStartE2EDuration="2.856579761s" podCreationTimestamp="2026-01-28 10:13:30 +0000 UTC" firstStartedPulling="2026-01-28 10:13:31.764069648 +0000 UTC m=+1864.913734031" lastFinishedPulling="2026-01-28 10:13:32.279722805 +0000 UTC m=+1865.429387178" observedRunningTime="2026-01-28 10:13:32.856334495 +0000 UTC m=+1866.005998898" watchObservedRunningTime="2026-01-28 10:13:32.856579761 +0000 UTC m=+1866.006244134" Jan 28 10:13:50 crc kubenswrapper[4735]: I0128 10:13:50.941496 4735 scope.go:117] "RemoveContainer" 
containerID="0ddf1a6cbd8429be5326f7cee4fea1624e6f7cf6e53ea11ce4c15b1c9bda090e"
Jan 28 10:13:50 crc kubenswrapper[4735]: I0128 10:13:50.983597 4735 scope.go:117] "RemoveContainer" containerID="82bd16090dd3bf04431102c84187e9644ed15ce87d7c7c93e9041f1f02e7828a"
Jan 28 10:13:51 crc kubenswrapper[4735]: I0128 10:13:51.055597 4735 scope.go:117] "RemoveContainer" containerID="b4cb50e9b49ad2276bedf78883f1c90f67839d845413f1e6b967cea5b35c3212"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.505791 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gfb2n"]
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.508561 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.518096 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gfb2n"]
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.675732 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-utilities\") pod \"certified-operators-gfb2n\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") " pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.675957 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-catalog-content\") pod \"certified-operators-gfb2n\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") " pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.676008 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbgk\" (UniqueName: \"kubernetes.io/projected/e4702622-186f-441d-b1e9-7ecc487ca388-kube-api-access-prbgk\") pod \"certified-operators-gfb2n\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") " pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.778195 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-utilities\") pod \"certified-operators-gfb2n\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") " pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.778727 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-utilities\") pod \"certified-operators-gfb2n\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") " pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.779027 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-catalog-content\") pod \"certified-operators-gfb2n\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") " pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.779332 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-catalog-content\") pod \"certified-operators-gfb2n\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") " pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.779418 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prbgk\" (UniqueName: \"kubernetes.io/projected/e4702622-186f-441d-b1e9-7ecc487ca388-kube-api-access-prbgk\") pod \"certified-operators-gfb2n\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") " pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.801290 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prbgk\" (UniqueName: \"kubernetes.io/projected/e4702622-186f-441d-b1e9-7ecc487ca388-kube-api-access-prbgk\") pod \"certified-operators-gfb2n\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") " pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:10 crc kubenswrapper[4735]: I0128 10:14:10.838957 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:11 crc kubenswrapper[4735]: I0128 10:14:11.361786 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gfb2n"]
Jan 28 10:14:12 crc kubenswrapper[4735]: I0128 10:14:12.189877 4735 generic.go:334] "Generic (PLEG): container finished" podID="e4702622-186f-441d-b1e9-7ecc487ca388" containerID="ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915" exitCode=0
Jan 28 10:14:12 crc kubenswrapper[4735]: I0128 10:14:12.190192 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfb2n" event={"ID":"e4702622-186f-441d-b1e9-7ecc487ca388","Type":"ContainerDied","Data":"ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915"}
Jan 28 10:14:12 crc kubenswrapper[4735]: I0128 10:14:12.190223 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfb2n" event={"ID":"e4702622-186f-441d-b1e9-7ecc487ca388","Type":"ContainerStarted","Data":"fb627d4d22f73539b96fad97ce935150c8aa31637ac1231b9e01ffec0ecdcd08"}
Jan 28 10:14:14 crc kubenswrapper[4735]: I0128 10:14:14.041710 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bzs7q"]
Jan 28 10:14:14 crc kubenswrapper[4735]: I0128 10:14:14.049490 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bzs7q"]
Jan 28 10:14:14 crc kubenswrapper[4735]: I0128 10:14:14.206740 4735 generic.go:334] "Generic (PLEG): container finished" podID="e4702622-186f-441d-b1e9-7ecc487ca388" containerID="45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6" exitCode=0
Jan 28 10:14:14 crc kubenswrapper[4735]: I0128 10:14:14.206801 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfb2n" event={"ID":"e4702622-186f-441d-b1e9-7ecc487ca388","Type":"ContainerDied","Data":"45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6"}
Jan 28 10:14:15 crc kubenswrapper[4735]: I0128 10:14:15.219254 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfb2n" event={"ID":"e4702622-186f-441d-b1e9-7ecc487ca388","Type":"ContainerStarted","Data":"d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132"}
Jan 28 10:14:15 crc kubenswrapper[4735]: I0128 10:14:15.238028 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gfb2n" podStartSLOduration=2.812227613 podStartE2EDuration="5.238012099s" podCreationTimestamp="2026-01-28 10:14:10 +0000 UTC" firstStartedPulling="2026-01-28 10:14:12.192127044 +0000 UTC m=+1905.341791417" lastFinishedPulling="2026-01-28 10:14:14.61791153 +0000 UTC m=+1907.767575903" observedRunningTime="2026-01-28 10:14:15.2355815 +0000 UTC m=+1908.385245873" watchObservedRunningTime="2026-01-28 10:14:15.238012099 +0000 UTC m=+1908.387676472"
Jan 28 10:14:15 crc kubenswrapper[4735]: I0128 10:14:15.516319 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5a59ad-84e0-4e0c-89a8-994b538ea4e1" path="/var/lib/kubelet/pods/fd5a59ad-84e0-4e0c-89a8-994b538ea4e1/volumes"
Jan 28 10:14:20 crc kubenswrapper[4735]: I0128 10:14:20.839842 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:20 crc kubenswrapper[4735]: I0128 10:14:20.840428 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:20 crc kubenswrapper[4735]: I0128 10:14:20.885138 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:21 crc kubenswrapper[4735]: I0128 10:14:21.267831 4735 generic.go:334] "Generic (PLEG): container finished" podID="f77b490a-f09b-4e92-a529-69d81c87478e" containerID="5e10a8e15ed29271f0c598fac0ba237eb8e9ee3fc3c98acce92d9ec1e0eda2f5" exitCode=0
Jan 28 10:14:21 crc kubenswrapper[4735]: I0128 10:14:21.267874 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" event={"ID":"f77b490a-f09b-4e92-a529-69d81c87478e","Type":"ContainerDied","Data":"5e10a8e15ed29271f0c598fac0ba237eb8e9ee3fc3c98acce92d9ec1e0eda2f5"}
Jan 28 10:14:21 crc kubenswrapper[4735]: I0128 10:14:21.323792 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:21 crc kubenswrapper[4735]: I0128 10:14:21.374106 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gfb2n"]
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.684806 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22"
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.811395 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-inventory\") pod \"f77b490a-f09b-4e92-a529-69d81c87478e\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") "
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.811735 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-ssh-key-openstack-edpm-ipam\") pod \"f77b490a-f09b-4e92-a529-69d81c87478e\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") "
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.812050 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9lkg\" (UniqueName: \"kubernetes.io/projected/f77b490a-f09b-4e92-a529-69d81c87478e-kube-api-access-m9lkg\") pod \"f77b490a-f09b-4e92-a529-69d81c87478e\" (UID: \"f77b490a-f09b-4e92-a529-69d81c87478e\") "
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.819596 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77b490a-f09b-4e92-a529-69d81c87478e-kube-api-access-m9lkg" (OuterVolumeSpecName: "kube-api-access-m9lkg") pod "f77b490a-f09b-4e92-a529-69d81c87478e" (UID: "f77b490a-f09b-4e92-a529-69d81c87478e"). InnerVolumeSpecName "kube-api-access-m9lkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.842903 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-inventory" (OuterVolumeSpecName: "inventory") pod "f77b490a-f09b-4e92-a529-69d81c87478e" (UID: "f77b490a-f09b-4e92-a529-69d81c87478e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.843329 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f77b490a-f09b-4e92-a529-69d81c87478e" (UID: "f77b490a-f09b-4e92-a529-69d81c87478e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.914707 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9lkg\" (UniqueName: \"kubernetes.io/projected/f77b490a-f09b-4e92-a529-69d81c87478e-kube-api-access-m9lkg\") on node \"crc\" DevicePath \"\""
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.914981 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 10:14:22 crc kubenswrapper[4735]: I0128 10:14:22.915104 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f77b490a-f09b-4e92-a529-69d81c87478e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.289056 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.289077 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8b22" event={"ID":"f77b490a-f09b-4e92-a529-69d81c87478e","Type":"ContainerDied","Data":"3d094ac5d044bfeee4b269ee84059f5b9df16fd1401a405a080bd1a5fc0824d4"}
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.289120 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d094ac5d044bfeee4b269ee84059f5b9df16fd1401a405a080bd1a5fc0824d4"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.289169 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gfb2n" podUID="e4702622-186f-441d-b1e9-7ecc487ca388" containerName="registry-server" containerID="cri-o://d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132" gracePeriod=2
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.413714 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5tc5h"]
Jan 28 10:14:23 crc kubenswrapper[4735]: E0128 10:14:23.414376 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77b490a-f09b-4e92-a529-69d81c87478e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.414454 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77b490a-f09b-4e92-a529-69d81c87478e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.414710 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77b490a-f09b-4e92-a529-69d81c87478e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.415578 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.418396 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.418412 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.419927 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.419930 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.428547 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5tc5h"]
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.528775 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7whk\" (UniqueName: \"kubernetes.io/projected/812becf2-d241-43e6-b9e4-58263bbfb115-kube-api-access-h7whk\") pod \"ssh-known-hosts-edpm-deployment-5tc5h\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.529100 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5tc5h\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.529242 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5tc5h\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.630616 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5tc5h\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.630681 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7whk\" (UniqueName: \"kubernetes.io/projected/812becf2-d241-43e6-b9e4-58263bbfb115-kube-api-access-h7whk\") pod \"ssh-known-hosts-edpm-deployment-5tc5h\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.630713 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5tc5h\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.634968 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5tc5h\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.641152 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5tc5h\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.654763 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7whk\" (UniqueName: \"kubernetes.io/projected/812becf2-d241-43e6-b9e4-58263bbfb115-kube-api-access-h7whk\") pod \"ssh-known-hosts-edpm-deployment-5tc5h\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.728450 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.788331 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h"
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.834321 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-catalog-content\") pod \"e4702622-186f-441d-b1e9-7ecc487ca388\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") "
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.834593 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prbgk\" (UniqueName: \"kubernetes.io/projected/e4702622-186f-441d-b1e9-7ecc487ca388-kube-api-access-prbgk\") pod \"e4702622-186f-441d-b1e9-7ecc487ca388\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") "
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.834664 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-utilities\") pod \"e4702622-186f-441d-b1e9-7ecc487ca388\" (UID: \"e4702622-186f-441d-b1e9-7ecc487ca388\") "
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.835659 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-utilities" (OuterVolumeSpecName: "utilities") pod "e4702622-186f-441d-b1e9-7ecc487ca388" (UID: "e4702622-186f-441d-b1e9-7ecc487ca388"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.838729 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4702622-186f-441d-b1e9-7ecc487ca388-kube-api-access-prbgk" (OuterVolumeSpecName: "kube-api-access-prbgk") pod "e4702622-186f-441d-b1e9-7ecc487ca388" (UID: "e4702622-186f-441d-b1e9-7ecc487ca388"). InnerVolumeSpecName "kube-api-access-prbgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.906407 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4702622-186f-441d-b1e9-7ecc487ca388" (UID: "e4702622-186f-441d-b1e9-7ecc487ca388"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.937498 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prbgk\" (UniqueName: \"kubernetes.io/projected/e4702622-186f-441d-b1e9-7ecc487ca388-kube-api-access-prbgk\") on node \"crc\" DevicePath \"\""
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.937561 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 10:14:23 crc kubenswrapper[4735]: I0128 10:14:23.937577 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4702622-186f-441d-b1e9-7ecc487ca388-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.304366 4735 generic.go:334] "Generic (PLEG): container finished" podID="e4702622-186f-441d-b1e9-7ecc487ca388" containerID="d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132" exitCode=0
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.304531 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gfb2n"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.304553 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfb2n" event={"ID":"e4702622-186f-441d-b1e9-7ecc487ca388","Type":"ContainerDied","Data":"d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132"}
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.304721 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gfb2n" event={"ID":"e4702622-186f-441d-b1e9-7ecc487ca388","Type":"ContainerDied","Data":"fb627d4d22f73539b96fad97ce935150c8aa31637ac1231b9e01ffec0ecdcd08"}
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.304858 4735 scope.go:117] "RemoveContainer" containerID="d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.328957 4735 scope.go:117] "RemoveContainer" containerID="45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.346257 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gfb2n"]
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.356079 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gfb2n"]
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.367969 4735 scope.go:117] "RemoveContainer" containerID="ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.379249 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5tc5h"]
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.391635 4735 scope.go:117] "RemoveContainer" containerID="d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132"
Jan 28 10:14:24 crc kubenswrapper[4735]: E0128 10:14:24.393928 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132\": container with ID starting with d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132 not found: ID does not exist" containerID="d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.393965 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132"} err="failed to get container status \"d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132\": rpc error: code = NotFound desc = could not find container \"d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132\": container with ID starting with d51a32fb02701dcdba9de3bbb9035cf27b92b770a83b2f958201198dbc5e5132 not found: ID does not exist"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.394010 4735 scope.go:117] "RemoveContainer" containerID="45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6"
Jan 28 10:14:24 crc kubenswrapper[4735]: E0128 10:14:24.395132 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6\": container with ID starting with 45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6 not found: ID does not exist" containerID="45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.395190 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6"} err="failed to get container status \"45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6\": rpc error: code = NotFound desc = could not find container \"45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6\": container with ID starting with 45d789ccc84fee5102d29518fa8c7629b0b069604878f6c0a6eb4d90ae9ec8d6 not found: ID does not exist"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.395208 4735 scope.go:117] "RemoveContainer" containerID="ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915"
Jan 28 10:14:24 crc kubenswrapper[4735]: E0128 10:14:24.395950 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915\": container with ID starting with ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915 not found: ID does not exist" containerID="ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.395985 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915"} err="failed to get container status \"ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915\": rpc error: code = NotFound desc = could not find container \"ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915\": container with ID starting with ebeee67be5a3be68a794aabece94245488d8b44355e34378458a6337b27cf915 not found: ID does not exist"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.531532 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9scs8"]
Jan 28 10:14:24 crc kubenswrapper[4735]: E0128 10:14:24.531911 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4702622-186f-441d-b1e9-7ecc487ca388" containerName="registry-server"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.531923 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4702622-186f-441d-b1e9-7ecc487ca388" containerName="registry-server"
Jan 28 10:14:24 crc kubenswrapper[4735]: E0128 10:14:24.531945 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4702622-186f-441d-b1e9-7ecc487ca388" containerName="extract-utilities"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.531951 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4702622-186f-441d-b1e9-7ecc487ca388" containerName="extract-utilities"
Jan 28 10:14:24 crc kubenswrapper[4735]: E0128 10:14:24.531970 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4702622-186f-441d-b1e9-7ecc487ca388" containerName="extract-content"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.531977 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4702622-186f-441d-b1e9-7ecc487ca388" containerName="extract-content"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.532168 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4702622-186f-441d-b1e9-7ecc487ca388" containerName="registry-server"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.533558 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.551328 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9scs8"]
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.664093 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-catalog-content\") pod \"redhat-operators-9scs8\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.664176 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7msh7\" (UniqueName: \"kubernetes.io/projected/502c580a-1fff-417c-ac48-b6f1008b85df-kube-api-access-7msh7\") pod \"redhat-operators-9scs8\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.664204 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-utilities\") pod \"redhat-operators-9scs8\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.766503 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-catalog-content\") pod \"redhat-operators-9scs8\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.766581 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7msh7\" (UniqueName: \"kubernetes.io/projected/502c580a-1fff-417c-ac48-b6f1008b85df-kube-api-access-7msh7\") pod \"redhat-operators-9scs8\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.766611 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-utilities\") pod \"redhat-operators-9scs8\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.767124 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-catalog-content\") pod \"redhat-operators-9scs8\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.767143 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-utilities\") pod \"redhat-operators-9scs8\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.788287 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7msh7\" (UniqueName: \"kubernetes.io/projected/502c580a-1fff-417c-ac48-b6f1008b85df-kube-api-access-7msh7\") pod \"redhat-operators-9scs8\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:24 crc kubenswrapper[4735]: I0128 10:14:24.874386 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9scs8"
Jan 28 10:14:25 crc kubenswrapper[4735]: I0128 10:14:25.314734 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h" event={"ID":"812becf2-d241-43e6-b9e4-58263bbfb115","Type":"ContainerStarted","Data":"292c6006fd44d2213cb1b92c39caaf87fa0f67acb071a629c40aeb23b03c4045"}
Jan 28 10:14:25 crc kubenswrapper[4735]: I0128 10:14:25.436154 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9scs8"]
Jan 28 10:14:25 crc kubenswrapper[4735]: I0128 10:14:25.525081 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4702622-186f-441d-b1e9-7ecc487ca388" path="/var/lib/kubelet/pods/e4702622-186f-441d-b1e9-7ecc487ca388/volumes"
Jan 28 10:14:26 crc kubenswrapper[4735]: I0128 10:14:26.337369 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h" event={"ID":"812becf2-d241-43e6-b9e4-58263bbfb115","Type":"ContainerStarted","Data":"ed805e848e9439bd29341f9e67c316772cb5b1e79fe149acf0bb65c825e5ccb7"}
Jan 28 10:14:26 crc kubenswrapper[4735]: I0128 10:14:26.343244 4735 generic.go:334] "Generic (PLEG): container finished" podID="502c580a-1fff-417c-ac48-b6f1008b85df" containerID="fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67" exitCode=0
Jan 28 10:14:26 crc kubenswrapper[4735]: I0128 10:14:26.343292 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9scs8" event={"ID":"502c580a-1fff-417c-ac48-b6f1008b85df","Type":"ContainerDied","Data":"fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67"}
Jan 28 10:14:26 crc kubenswrapper[4735]: I0128 10:14:26.343347 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9scs8" event={"ID":"502c580a-1fff-417c-ac48-b6f1008b85df","Type":"ContainerStarted","Data":"17e7621ae806a83760a6b124afc7cb9cc17fa6d5e1073a70b8222e742f91cf1a"}
Jan 28 10:14:26 crc kubenswrapper[4735]: I0128 10:14:26.360242 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h" podStartSLOduration=2.491514368 podStartE2EDuration="3.360213s" podCreationTimestamp="2026-01-28 10:14:23 +0000 UTC" firstStartedPulling="2026-01-28 10:14:24.391634606 +0000 UTC m=+1917.541298979" lastFinishedPulling="2026-01-28 10:14:25.260333238 +0000 UTC m=+1918.409997611" observedRunningTime="2026-01-28 10:14:26.358232391 +0000 UTC m=+1919.507896774" watchObservedRunningTime="2026-01-28 10:14:26.360213 +0000 UTC m=+1919.509877373"
Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.561301 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pclrq"]
Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.565153 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pclrq"
Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.588691 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pclrq"]
Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.648070 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvzk\" (UniqueName: \"kubernetes.io/projected/4ae16cda-c070-41a7-be09-59ad28b25d0e-kube-api-access-pvvzk\") pod \"community-operators-pclrq\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " pod="openshift-marketplace/community-operators-pclrq"
Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.648677 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-catalog-content\") pod \"community-operators-pclrq\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " pod="openshift-marketplace/community-operators-pclrq"
Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.648943 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-utilities\") pod \"community-operators-pclrq\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " pod="openshift-marketplace/community-operators-pclrq"
Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.750857 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-catalog-content\") pod \"community-operators-pclrq\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " pod="openshift-marketplace/community-operators-pclrq"
Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.750932 4735 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-utilities\") pod \"community-operators-pclrq\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.751013 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvvzk\" (UniqueName: \"kubernetes.io/projected/4ae16cda-c070-41a7-be09-59ad28b25d0e-kube-api-access-pvvzk\") pod \"community-operators-pclrq\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.751454 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-utilities\") pod \"community-operators-pclrq\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.751452 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-catalog-content\") pod \"community-operators-pclrq\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.772461 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvvzk\" (UniqueName: \"kubernetes.io/projected/4ae16cda-c070-41a7-be09-59ad28b25d0e-kube-api-access-pvvzk\") pod \"community-operators-pclrq\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:27 crc kubenswrapper[4735]: I0128 10:14:27.889835 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:28 crc kubenswrapper[4735]: I0128 10:14:28.363118 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9scs8" event={"ID":"502c580a-1fff-417c-ac48-b6f1008b85df","Type":"ContainerStarted","Data":"5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7"} Jan 28 10:14:28 crc kubenswrapper[4735]: I0128 10:14:28.445363 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pclrq"] Jan 28 10:14:28 crc kubenswrapper[4735]: E0128 10:14:28.852309 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ae16cda_c070_41a7_be09_59ad28b25d0e.slice/crio-56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397.scope\": RecentStats: unable to find data in memory cache]" Jan 28 10:14:29 crc kubenswrapper[4735]: I0128 10:14:29.373454 4735 generic.go:334] "Generic (PLEG): container finished" podID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerID="56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397" exitCode=0 Jan 28 10:14:29 crc kubenswrapper[4735]: I0128 10:14:29.373524 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pclrq" event={"ID":"4ae16cda-c070-41a7-be09-59ad28b25d0e","Type":"ContainerDied","Data":"56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397"} Jan 28 10:14:29 crc kubenswrapper[4735]: I0128 10:14:29.373580 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pclrq" event={"ID":"4ae16cda-c070-41a7-be09-59ad28b25d0e","Type":"ContainerStarted","Data":"0893fe415ac21a1cf7d97f2714155401685b395c456946155691e0b118be3429"} Jan 28 10:14:30 crc kubenswrapper[4735]: I0128 10:14:30.384241 4735 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-pclrq" event={"ID":"4ae16cda-c070-41a7-be09-59ad28b25d0e","Type":"ContainerStarted","Data":"bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d"} Jan 28 10:14:31 crc kubenswrapper[4735]: I0128 10:14:31.394124 4735 generic.go:334] "Generic (PLEG): container finished" podID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerID="bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d" exitCode=0 Jan 28 10:14:31 crc kubenswrapper[4735]: I0128 10:14:31.394175 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pclrq" event={"ID":"4ae16cda-c070-41a7-be09-59ad28b25d0e","Type":"ContainerDied","Data":"bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d"} Jan 28 10:14:33 crc kubenswrapper[4735]: I0128 10:14:33.414343 4735 generic.go:334] "Generic (PLEG): container finished" podID="812becf2-d241-43e6-b9e4-58263bbfb115" containerID="ed805e848e9439bd29341f9e67c316772cb5b1e79fe149acf0bb65c825e5ccb7" exitCode=0 Jan 28 10:14:33 crc kubenswrapper[4735]: I0128 10:14:33.414447 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h" event={"ID":"812becf2-d241-43e6-b9e4-58263bbfb115","Type":"ContainerDied","Data":"ed805e848e9439bd29341f9e67c316772cb5b1e79fe149acf0bb65c825e5ccb7"} Jan 28 10:14:34 crc kubenswrapper[4735]: I0128 10:14:34.426858 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pclrq" event={"ID":"4ae16cda-c070-41a7-be09-59ad28b25d0e","Type":"ContainerStarted","Data":"13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e"} Jan 28 10:14:34 crc kubenswrapper[4735]: I0128 10:14:34.460533 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pclrq" podStartSLOduration=3.637264963 podStartE2EDuration="7.460514747s" podCreationTimestamp="2026-01-28 10:14:27 +0000 UTC" 
firstStartedPulling="2026-01-28 10:14:29.376468904 +0000 UTC m=+1922.526133277" lastFinishedPulling="2026-01-28 10:14:33.199718688 +0000 UTC m=+1926.349383061" observedRunningTime="2026-01-28 10:14:34.44788222 +0000 UTC m=+1927.597546593" watchObservedRunningTime="2026-01-28 10:14:34.460514747 +0000 UTC m=+1927.610179120" Jan 28 10:14:34 crc kubenswrapper[4735]: I0128 10:14:34.848465 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h" Jan 28 10:14:34 crc kubenswrapper[4735]: I0128 10:14:34.998791 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-inventory-0\") pod \"812becf2-d241-43e6-b9e4-58263bbfb115\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " Jan 28 10:14:34 crc kubenswrapper[4735]: I0128 10:14:34.998907 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-ssh-key-openstack-edpm-ipam\") pod \"812becf2-d241-43e6-b9e4-58263bbfb115\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " Jan 28 10:14:34 crc kubenswrapper[4735]: I0128 10:14:34.998952 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7whk\" (UniqueName: \"kubernetes.io/projected/812becf2-d241-43e6-b9e4-58263bbfb115-kube-api-access-h7whk\") pod \"812becf2-d241-43e6-b9e4-58263bbfb115\" (UID: \"812becf2-d241-43e6-b9e4-58263bbfb115\") " Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.008694 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812becf2-d241-43e6-b9e4-58263bbfb115-kube-api-access-h7whk" (OuterVolumeSpecName: "kube-api-access-h7whk") pod "812becf2-d241-43e6-b9e4-58263bbfb115" (UID: "812becf2-d241-43e6-b9e4-58263bbfb115"). 
InnerVolumeSpecName "kube-api-access-h7whk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.028396 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "812becf2-d241-43e6-b9e4-58263bbfb115" (UID: "812becf2-d241-43e6-b9e4-58263bbfb115"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.032985 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "812becf2-d241-43e6-b9e4-58263bbfb115" (UID: "812becf2-d241-43e6-b9e4-58263bbfb115"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.101482 4735 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.101520 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/812becf2-d241-43e6-b9e4-58263bbfb115-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.101532 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7whk\" (UniqueName: \"kubernetes.io/projected/812becf2-d241-43e6-b9e4-58263bbfb115-kube-api-access-h7whk\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.435433 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h" event={"ID":"812becf2-d241-43e6-b9e4-58263bbfb115","Type":"ContainerDied","Data":"292c6006fd44d2213cb1b92c39caaf87fa0f67acb071a629c40aeb23b03c4045"} Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.435512 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="292c6006fd44d2213cb1b92c39caaf87fa0f67acb071a629c40aeb23b03c4045" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.435456 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5tc5h" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.514210 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9"] Jan 28 10:14:35 crc kubenswrapper[4735]: E0128 10:14:35.514622 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812becf2-d241-43e6-b9e4-58263bbfb115" containerName="ssh-known-hosts-edpm-deployment" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.514644 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="812becf2-d241-43e6-b9e4-58263bbfb115" containerName="ssh-known-hosts-edpm-deployment" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.514884 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="812becf2-d241-43e6-b9e4-58263bbfb115" containerName="ssh-known-hosts-edpm-deployment" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.515572 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.517726 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.518054 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.518201 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.518393 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.529768 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9"] Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.611655 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p7fn9\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.611917 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9z49\" (UniqueName: \"kubernetes.io/projected/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-kube-api-access-h9z49\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p7fn9\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.612395 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p7fn9\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.714294 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p7fn9\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.714672 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9z49\" (UniqueName: \"kubernetes.io/projected/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-kube-api-access-h9z49\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p7fn9\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.714893 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p7fn9\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.718997 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-p7fn9\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.720399 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p7fn9\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.733005 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9z49\" (UniqueName: \"kubernetes.io/projected/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-kube-api-access-h9z49\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p7fn9\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:35 crc kubenswrapper[4735]: I0128 10:14:35.848774 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:36 crc kubenswrapper[4735]: I0128 10:14:36.463262 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9"] Jan 28 10:14:37 crc kubenswrapper[4735]: I0128 10:14:37.452773 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" event={"ID":"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70","Type":"ContainerStarted","Data":"f4ffc071641417b6331aefc3a14a087aff218d2ecff40afbf8d1384ad0c8f452"} Jan 28 10:14:37 crc kubenswrapper[4735]: I0128 10:14:37.889980 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:37 crc kubenswrapper[4735]: I0128 10:14:37.890292 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:37 crc kubenswrapper[4735]: I0128 10:14:37.934235 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:38 crc kubenswrapper[4735]: I0128 10:14:38.461932 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" event={"ID":"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70","Type":"ContainerStarted","Data":"05c3ecd87e95c61410127625321f624ab151f876accb8a00b5f795fc9f07c6e9"} Jan 28 10:14:38 crc kubenswrapper[4735]: I0128 10:14:38.484693 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" podStartSLOduration=1.9767681860000001 podStartE2EDuration="3.484672513s" podCreationTimestamp="2026-01-28 10:14:35 +0000 UTC" firstStartedPulling="2026-01-28 10:14:36.469947886 +0000 UTC m=+1929.619612259" lastFinishedPulling="2026-01-28 10:14:37.977852213 +0000 UTC 
m=+1931.127516586" observedRunningTime="2026-01-28 10:14:38.482709676 +0000 UTC m=+1931.632374049" watchObservedRunningTime="2026-01-28 10:14:38.484672513 +0000 UTC m=+1931.634336886" Jan 28 10:14:38 crc kubenswrapper[4735]: I0128 10:14:38.517462 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:38 crc kubenswrapper[4735]: I0128 10:14:38.570983 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pclrq"] Jan 28 10:14:40 crc kubenswrapper[4735]: I0128 10:14:40.845634 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pclrq" podUID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerName="registry-server" containerID="cri-o://13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e" gracePeriod=2 Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.283843 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.334861 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvvzk\" (UniqueName: \"kubernetes.io/projected/4ae16cda-c070-41a7-be09-59ad28b25d0e-kube-api-access-pvvzk\") pod \"4ae16cda-c070-41a7-be09-59ad28b25d0e\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.334997 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-catalog-content\") pod \"4ae16cda-c070-41a7-be09-59ad28b25d0e\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.335084 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-utilities\") pod \"4ae16cda-c070-41a7-be09-59ad28b25d0e\" (UID: \"4ae16cda-c070-41a7-be09-59ad28b25d0e\") " Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.335806 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-utilities" (OuterVolumeSpecName: "utilities") pod "4ae16cda-c070-41a7-be09-59ad28b25d0e" (UID: "4ae16cda-c070-41a7-be09-59ad28b25d0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.340074 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae16cda-c070-41a7-be09-59ad28b25d0e-kube-api-access-pvvzk" (OuterVolumeSpecName: "kube-api-access-pvvzk") pod "4ae16cda-c070-41a7-be09-59ad28b25d0e" (UID: "4ae16cda-c070-41a7-be09-59ad28b25d0e"). InnerVolumeSpecName "kube-api-access-pvvzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.395658 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ae16cda-c070-41a7-be09-59ad28b25d0e" (UID: "4ae16cda-c070-41a7-be09-59ad28b25d0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.437930 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvvzk\" (UniqueName: \"kubernetes.io/projected/4ae16cda-c070-41a7-be09-59ad28b25d0e-kube-api-access-pvvzk\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.438162 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.438220 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae16cda-c070-41a7-be09-59ad28b25d0e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.856166 4735 generic.go:334] "Generic (PLEG): container finished" podID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerID="13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e" exitCode=0 Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.856346 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pclrq" event={"ID":"4ae16cda-c070-41a7-be09-59ad28b25d0e","Type":"ContainerDied","Data":"13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e"} Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.856568 4735 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-pclrq" event={"ID":"4ae16cda-c070-41a7-be09-59ad28b25d0e","Type":"ContainerDied","Data":"0893fe415ac21a1cf7d97f2714155401685b395c456946155691e0b118be3429"} Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.856599 4735 scope.go:117] "RemoveContainer" containerID="13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.857863 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pclrq" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.860063 4735 generic.go:334] "Generic (PLEG): container finished" podID="502c580a-1fff-417c-ac48-b6f1008b85df" containerID="5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7" exitCode=0 Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.860129 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9scs8" event={"ID":"502c580a-1fff-417c-ac48-b6f1008b85df","Type":"ContainerDied","Data":"5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7"} Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.883557 4735 scope.go:117] "RemoveContainer" containerID="bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.920177 4735 scope.go:117] "RemoveContainer" containerID="56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.923691 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pclrq"] Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.932656 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pclrq"] Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.972433 4735 scope.go:117] "RemoveContainer" 
containerID="13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e" Jan 28 10:14:41 crc kubenswrapper[4735]: E0128 10:14:41.975928 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e\": container with ID starting with 13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e not found: ID does not exist" containerID="13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.975983 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e"} err="failed to get container status \"13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e\": rpc error: code = NotFound desc = could not find container \"13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e\": container with ID starting with 13cfddb08b0b89a1b0f36787cdb673071ecc2f1a7215a0ac9f1cd7ee170ab35e not found: ID does not exist" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.976015 4735 scope.go:117] "RemoveContainer" containerID="bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d" Jan 28 10:14:41 crc kubenswrapper[4735]: E0128 10:14:41.980236 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d\": container with ID starting with bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d not found: ID does not exist" containerID="bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.980263 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d"} err="failed to get container status \"bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d\": rpc error: code = NotFound desc = could not find container \"bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d\": container with ID starting with bd8a34a9da7b69aba31e716979074ae5c57406174c4d1c7e12508fbec47be04d not found: ID does not exist" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.980279 4735 scope.go:117] "RemoveContainer" containerID="56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397" Jan 28 10:14:41 crc kubenswrapper[4735]: E0128 10:14:41.980585 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397\": container with ID starting with 56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397 not found: ID does not exist" containerID="56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397" Jan 28 10:14:41 crc kubenswrapper[4735]: I0128 10:14:41.980613 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397"} err="failed to get container status \"56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397\": rpc error: code = NotFound desc = could not find container \"56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397\": container with ID starting with 56752958c8b4a4a27e09d474edca3b7f1f7cc48c62e2a8667e69a8c74d4e2397 not found: ID does not exist" Jan 28 10:14:42 crc kubenswrapper[4735]: I0128 10:14:42.871631 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9scs8" 
event={"ID":"502c580a-1fff-417c-ac48-b6f1008b85df","Type":"ContainerStarted","Data":"321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f"} Jan 28 10:14:42 crc kubenswrapper[4735]: I0128 10:14:42.890305 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9scs8" podStartSLOduration=2.941575395 podStartE2EDuration="18.890276018s" podCreationTimestamp="2026-01-28 10:14:24 +0000 UTC" firstStartedPulling="2026-01-28 10:14:26.345340137 +0000 UTC m=+1919.495004510" lastFinishedPulling="2026-01-28 10:14:42.29404076 +0000 UTC m=+1935.443705133" observedRunningTime="2026-01-28 10:14:42.888635858 +0000 UTC m=+1936.038300231" watchObservedRunningTime="2026-01-28 10:14:42.890276018 +0000 UTC m=+1936.039940391" Jan 28 10:14:43 crc kubenswrapper[4735]: I0128 10:14:43.516370 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae16cda-c070-41a7-be09-59ad28b25d0e" path="/var/lib/kubelet/pods/4ae16cda-c070-41a7-be09-59ad28b25d0e/volumes" Jan 28 10:14:44 crc kubenswrapper[4735]: I0128 10:14:44.874930 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9scs8" Jan 28 10:14:44 crc kubenswrapper[4735]: I0128 10:14:44.875282 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9scs8" Jan 28 10:14:45 crc kubenswrapper[4735]: I0128 10:14:45.922938 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9scs8" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" containerName="registry-server" probeResult="failure" output=< Jan 28 10:14:45 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 28 10:14:45 crc kubenswrapper[4735]: > Jan 28 10:14:47 crc kubenswrapper[4735]: I0128 10:14:47.967873 4735 generic.go:334] "Generic (PLEG): container finished" podID="2e5e3a3d-adbc-42b9-a436-a3b565cfdb70" 
containerID="05c3ecd87e95c61410127625321f624ab151f876accb8a00b5f795fc9f07c6e9" exitCode=0 Jan 28 10:14:47 crc kubenswrapper[4735]: I0128 10:14:47.968012 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" event={"ID":"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70","Type":"ContainerDied","Data":"05c3ecd87e95c61410127625321f624ab151f876accb8a00b5f795fc9f07c6e9"} Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.441243 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.594159 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9z49\" (UniqueName: \"kubernetes.io/projected/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-kube-api-access-h9z49\") pod \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.594313 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-ssh-key-openstack-edpm-ipam\") pod \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.594854 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-inventory\") pod \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\" (UID: \"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70\") " Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.608762 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-kube-api-access-h9z49" (OuterVolumeSpecName: "kube-api-access-h9z49") pod 
"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70" (UID: "2e5e3a3d-adbc-42b9-a436-a3b565cfdb70"). InnerVolumeSpecName "kube-api-access-h9z49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.631226 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-inventory" (OuterVolumeSpecName: "inventory") pod "2e5e3a3d-adbc-42b9-a436-a3b565cfdb70" (UID: "2e5e3a3d-adbc-42b9-a436-a3b565cfdb70"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.631412 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2e5e3a3d-adbc-42b9-a436-a3b565cfdb70" (UID: "2e5e3a3d-adbc-42b9-a436-a3b565cfdb70"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.697399 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9z49\" (UniqueName: \"kubernetes.io/projected/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-kube-api-access-h9z49\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.697436 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.697446 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e5e3a3d-adbc-42b9-a436-a3b565cfdb70-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.987472 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" event={"ID":"2e5e3a3d-adbc-42b9-a436-a3b565cfdb70","Type":"ContainerDied","Data":"f4ffc071641417b6331aefc3a14a087aff218d2ecff40afbf8d1384ad0c8f452"} Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.987528 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p7fn9" Jan 28 10:14:49 crc kubenswrapper[4735]: I0128 10:14:49.987525 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ffc071641417b6331aefc3a14a087aff218d2ecff40afbf8d1384ad0c8f452" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.081973 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc"] Jan 28 10:14:50 crc kubenswrapper[4735]: E0128 10:14:50.082513 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerName="extract-content" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.082537 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerName="extract-content" Jan 28 10:14:50 crc kubenswrapper[4735]: E0128 10:14:50.082549 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerName="registry-server" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.082557 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerName="registry-server" Jan 28 10:14:50 crc kubenswrapper[4735]: E0128 10:14:50.082572 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5e3a3d-adbc-42b9-a436-a3b565cfdb70" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.082581 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5e3a3d-adbc-42b9-a436-a3b565cfdb70" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 10:14:50 crc kubenswrapper[4735]: E0128 10:14:50.082603 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerName="extract-utilities" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 
10:14:50.082611 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerName="extract-utilities" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.082852 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5e3a3d-adbc-42b9-a436-a3b565cfdb70" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.082872 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae16cda-c070-41a7-be09-59ad28b25d0e" containerName="registry-server" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.083619 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.086404 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.086884 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.087046 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.087080 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.103450 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc"] Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.113385 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.113477 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ln4\" (UniqueName: \"kubernetes.io/projected/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-kube-api-access-z8ln4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.113556 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.214883 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ln4\" (UniqueName: \"kubernetes.io/projected/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-kube-api-access-z8ln4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.215227 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: 
I0128 10:14:50.215376 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.222928 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.229594 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.235870 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ln4\" (UniqueName: \"kubernetes.io/projected/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-kube-api-access-z8ln4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.401526 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.961336 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc"] Jan 28 10:14:50 crc kubenswrapper[4735]: I0128 10:14:50.997349 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" event={"ID":"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1","Type":"ContainerStarted","Data":"6fe26c9215ba8dda8f67c98472f78216e0742588874295eeb0c74f6909ad2073"} Jan 28 10:14:51 crc kubenswrapper[4735]: I0128 10:14:51.189198 4735 scope.go:117] "RemoveContainer" containerID="2261bd57f64694e6e287a80d787d85735187d87a17b18b03b8acb5b80346e30c" Jan 28 10:14:53 crc kubenswrapper[4735]: I0128 10:14:53.015914 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" event={"ID":"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1","Type":"ContainerStarted","Data":"058d53c26778b2389f5aab73bff9287590dadd89a35425372974383367398792"} Jan 28 10:14:53 crc kubenswrapper[4735]: I0128 10:14:53.039620 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" podStartSLOduration=2.229387 podStartE2EDuration="3.039600534s" podCreationTimestamp="2026-01-28 10:14:50 +0000 UTC" firstStartedPulling="2026-01-28 10:14:50.96578353 +0000 UTC m=+1944.115447903" lastFinishedPulling="2026-01-28 10:14:51.775997064 +0000 UTC m=+1944.925661437" observedRunningTime="2026-01-28 10:14:53.032406488 +0000 UTC m=+1946.182070891" watchObservedRunningTime="2026-01-28 10:14:53.039600534 +0000 UTC m=+1946.189264907" Jan 28 10:14:54 crc kubenswrapper[4735]: I0128 10:14:54.959999 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9scs8" Jan 28 10:14:55 crc 
kubenswrapper[4735]: I0128 10:14:55.013928 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9scs8" Jan 28 10:14:55 crc kubenswrapper[4735]: I0128 10:14:55.726714 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9scs8"] Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.058143 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9scs8" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" containerName="registry-server" containerID="cri-o://321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f" gracePeriod=2 Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.514966 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9scs8" Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.655125 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7msh7\" (UniqueName: \"kubernetes.io/projected/502c580a-1fff-417c-ac48-b6f1008b85df-kube-api-access-7msh7\") pod \"502c580a-1fff-417c-ac48-b6f1008b85df\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.655268 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-utilities\") pod \"502c580a-1fff-417c-ac48-b6f1008b85df\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.655360 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-catalog-content\") pod \"502c580a-1fff-417c-ac48-b6f1008b85df\" (UID: \"502c580a-1fff-417c-ac48-b6f1008b85df\") " Jan 28 10:14:56 crc 
kubenswrapper[4735]: I0128 10:14:56.656165 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-utilities" (OuterVolumeSpecName: "utilities") pod "502c580a-1fff-417c-ac48-b6f1008b85df" (UID: "502c580a-1fff-417c-ac48-b6f1008b85df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.660493 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502c580a-1fff-417c-ac48-b6f1008b85df-kube-api-access-7msh7" (OuterVolumeSpecName: "kube-api-access-7msh7") pod "502c580a-1fff-417c-ac48-b6f1008b85df" (UID: "502c580a-1fff-417c-ac48-b6f1008b85df"). InnerVolumeSpecName "kube-api-access-7msh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.757530 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7msh7\" (UniqueName: \"kubernetes.io/projected/502c580a-1fff-417c-ac48-b6f1008b85df-kube-api-access-7msh7\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.757567 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.798963 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "502c580a-1fff-417c-ac48-b6f1008b85df" (UID: "502c580a-1fff-417c-ac48-b6f1008b85df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:14:56 crc kubenswrapper[4735]: I0128 10:14:56.859303 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502c580a-1fff-417c-ac48-b6f1008b85df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.070945 4735 generic.go:334] "Generic (PLEG): container finished" podID="502c580a-1fff-417c-ac48-b6f1008b85df" containerID="321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f" exitCode=0 Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.070992 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9scs8" event={"ID":"502c580a-1fff-417c-ac48-b6f1008b85df","Type":"ContainerDied","Data":"321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f"} Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.071018 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9scs8" event={"ID":"502c580a-1fff-417c-ac48-b6f1008b85df","Type":"ContainerDied","Data":"17e7621ae806a83760a6b124afc7cb9cc17fa6d5e1073a70b8222e742f91cf1a"} Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.071040 4735 scope.go:117] "RemoveContainer" containerID="321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.071279 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9scs8" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.099048 4735 scope.go:117] "RemoveContainer" containerID="5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.107888 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9scs8"] Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.117188 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9scs8"] Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.124399 4735 scope.go:117] "RemoveContainer" containerID="fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.164620 4735 scope.go:117] "RemoveContainer" containerID="321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f" Jan 28 10:14:57 crc kubenswrapper[4735]: E0128 10:14:57.165205 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f\": container with ID starting with 321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f not found: ID does not exist" containerID="321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.165236 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f"} err="failed to get container status \"321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f\": rpc error: code = NotFound desc = could not find container \"321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f\": container with ID starting with 321e981fa6305be6aef147ec136187ee72ff06bbd7267d80efcce1d9a158978f not found: ID does 
not exist" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.165264 4735 scope.go:117] "RemoveContainer" containerID="5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7" Jan 28 10:14:57 crc kubenswrapper[4735]: E0128 10:14:57.165473 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7\": container with ID starting with 5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7 not found: ID does not exist" containerID="5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.165497 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7"} err="failed to get container status \"5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7\": rpc error: code = NotFound desc = could not find container \"5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7\": container with ID starting with 5e5a165407877eb05b121bbb5566a7462ad1d3f8480f5d5211b29b32d8b272f7 not found: ID does not exist" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.165510 4735 scope.go:117] "RemoveContainer" containerID="fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67" Jan 28 10:14:57 crc kubenswrapper[4735]: E0128 10:14:57.165688 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67\": container with ID starting with fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67 not found: ID does not exist" containerID="fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.165708 4735 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67"} err="failed to get container status \"fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67\": rpc error: code = NotFound desc = could not find container \"fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67\": container with ID starting with fbc63208cdc07ef965a9ceac15045709d786a0584a02e70a77060759715f8e67 not found: ID does not exist" Jan 28 10:14:57 crc kubenswrapper[4735]: I0128 10:14:57.514391 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" path="/var/lib/kubelet/pods/502c580a-1fff-417c-ac48-b6f1008b85df/volumes" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.141041 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt"] Jan 28 10:15:00 crc kubenswrapper[4735]: E0128 10:15:00.142178 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" containerName="registry-server" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.142196 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" containerName="registry-server" Jan 28 10:15:00 crc kubenswrapper[4735]: E0128 10:15:00.142212 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" containerName="extract-content" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.142219 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" containerName="extract-content" Jan 28 10:15:00 crc kubenswrapper[4735]: E0128 10:15:00.142236 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" containerName="extract-utilities" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 
10:15:00.142244 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" containerName="extract-utilities" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.142485 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="502c580a-1fff-417c-ac48-b6f1008b85df" containerName="registry-server" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.143155 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.145473 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.148211 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.161528 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt"] Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.327696 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32cf65c7-c629-45c0-936b-23307f9d3ad8-secret-volume\") pod \"collect-profiles-29493255-8pnwt\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.327894 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxtvh\" (UniqueName: \"kubernetes.io/projected/32cf65c7-c629-45c0-936b-23307f9d3ad8-kube-api-access-gxtvh\") pod \"collect-profiles-29493255-8pnwt\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.327955 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32cf65c7-c629-45c0-936b-23307f9d3ad8-config-volume\") pod \"collect-profiles-29493255-8pnwt\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.429805 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32cf65c7-c629-45c0-936b-23307f9d3ad8-secret-volume\") pod \"collect-profiles-29493255-8pnwt\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.429990 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxtvh\" (UniqueName: \"kubernetes.io/projected/32cf65c7-c629-45c0-936b-23307f9d3ad8-kube-api-access-gxtvh\") pod \"collect-profiles-29493255-8pnwt\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.430062 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32cf65c7-c629-45c0-936b-23307f9d3ad8-config-volume\") pod \"collect-profiles-29493255-8pnwt\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.432253 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/32cf65c7-c629-45c0-936b-23307f9d3ad8-config-volume\") pod \"collect-profiles-29493255-8pnwt\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.439246 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32cf65c7-c629-45c0-936b-23307f9d3ad8-secret-volume\") pod \"collect-profiles-29493255-8pnwt\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.452688 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxtvh\" (UniqueName: \"kubernetes.io/projected/32cf65c7-c629-45c0-936b-23307f9d3ad8-kube-api-access-gxtvh\") pod \"collect-profiles-29493255-8pnwt\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.467314 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:00 crc kubenswrapper[4735]: I0128 10:15:00.928880 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt"] Jan 28 10:15:01 crc kubenswrapper[4735]: I0128 10:15:01.114789 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" event={"ID":"32cf65c7-c629-45c0-936b-23307f9d3ad8","Type":"ContainerStarted","Data":"43a6452133c4568c7f38c8b158e0a5f4322ff88686df43e300758214f45e3a49"} Jan 28 10:15:02 crc kubenswrapper[4735]: I0128 10:15:02.126724 4735 generic.go:334] "Generic (PLEG): container finished" podID="fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1" containerID="058d53c26778b2389f5aab73bff9287590dadd89a35425372974383367398792" exitCode=0 Jan 28 10:15:02 crc kubenswrapper[4735]: I0128 10:15:02.126843 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" event={"ID":"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1","Type":"ContainerDied","Data":"058d53c26778b2389f5aab73bff9287590dadd89a35425372974383367398792"} Jan 28 10:15:02 crc kubenswrapper[4735]: I0128 10:15:02.132495 4735 generic.go:334] "Generic (PLEG): container finished" podID="32cf65c7-c629-45c0-936b-23307f9d3ad8" containerID="7df9d6fa0fa8958b4863b33991f13c17064a97aae1bba94899150123dcba567a" exitCode=0 Jan 28 10:15:02 crc kubenswrapper[4735]: I0128 10:15:02.132593 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" event={"ID":"32cf65c7-c629-45c0-936b-23307f9d3ad8","Type":"ContainerDied","Data":"7df9d6fa0fa8958b4863b33991f13c17064a97aae1bba94899150123dcba567a"} Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.513641 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.593236 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxtvh\" (UniqueName: \"kubernetes.io/projected/32cf65c7-c629-45c0-936b-23307f9d3ad8-kube-api-access-gxtvh\") pod \"32cf65c7-c629-45c0-936b-23307f9d3ad8\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.593368 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32cf65c7-c629-45c0-936b-23307f9d3ad8-secret-volume\") pod \"32cf65c7-c629-45c0-936b-23307f9d3ad8\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.593642 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32cf65c7-c629-45c0-936b-23307f9d3ad8-config-volume\") pod \"32cf65c7-c629-45c0-936b-23307f9d3ad8\" (UID: \"32cf65c7-c629-45c0-936b-23307f9d3ad8\") " Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.596125 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32cf65c7-c629-45c0-936b-23307f9d3ad8-config-volume" (OuterVolumeSpecName: "config-volume") pod "32cf65c7-c629-45c0-936b-23307f9d3ad8" (UID: "32cf65c7-c629-45c0-936b-23307f9d3ad8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.601017 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cf65c7-c629-45c0-936b-23307f9d3ad8-kube-api-access-gxtvh" (OuterVolumeSpecName: "kube-api-access-gxtvh") pod "32cf65c7-c629-45c0-936b-23307f9d3ad8" (UID: "32cf65c7-c629-45c0-936b-23307f9d3ad8"). 
InnerVolumeSpecName "kube-api-access-gxtvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.602665 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cf65c7-c629-45c0-936b-23307f9d3ad8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "32cf65c7-c629-45c0-936b-23307f9d3ad8" (UID: "32cf65c7-c629-45c0-936b-23307f9d3ad8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.690176 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.695982 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32cf65c7-c629-45c0-936b-23307f9d3ad8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.696028 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxtvh\" (UniqueName: \"kubernetes.io/projected/32cf65c7-c629-45c0-936b-23307f9d3ad8-kube-api-access-gxtvh\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.696039 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32cf65c7-c629-45c0-936b-23307f9d3ad8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.797032 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-ssh-key-openstack-edpm-ipam\") pod \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.797188 
4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ln4\" (UniqueName: \"kubernetes.io/projected/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-kube-api-access-z8ln4\") pod \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.797280 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-inventory\") pod \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\" (UID: \"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1\") " Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.802842 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-kube-api-access-z8ln4" (OuterVolumeSpecName: "kube-api-access-z8ln4") pod "fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1" (UID: "fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1"). InnerVolumeSpecName "kube-api-access-z8ln4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.829720 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1" (UID: "fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.829785 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-inventory" (OuterVolumeSpecName: "inventory") pod "fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1" (UID: "fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.900042 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.900100 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ln4\" (UniqueName: \"kubernetes.io/projected/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-kube-api-access-z8ln4\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:03 crc kubenswrapper[4735]: I0128 10:15:03.900118 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.151485 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.151484 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc" event={"ID":"fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1","Type":"ContainerDied","Data":"6fe26c9215ba8dda8f67c98472f78216e0742588874295eeb0c74f6909ad2073"} Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.151604 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fe26c9215ba8dda8f67c98472f78216e0742588874295eeb0c74f6909ad2073" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.153538 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" event={"ID":"32cf65c7-c629-45c0-936b-23307f9d3ad8","Type":"ContainerDied","Data":"43a6452133c4568c7f38c8b158e0a5f4322ff88686df43e300758214f45e3a49"} Jan 28 
10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.153647 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a6452133c4568c7f38c8b158e0a5f4322ff88686df43e300758214f45e3a49" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.153616 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493255-8pnwt" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.231458 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq"] Jan 28 10:15:04 crc kubenswrapper[4735]: E0128 10:15:04.231848 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.231867 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 10:15:04 crc kubenswrapper[4735]: E0128 10:15:04.231908 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cf65c7-c629-45c0-936b-23307f9d3ad8" containerName="collect-profiles" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.231914 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cf65c7-c629-45c0-936b-23307f9d3ad8" containerName="collect-profiles" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.232088 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.232102 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cf65c7-c629-45c0-936b-23307f9d3ad8" containerName="collect-profiles" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.232694 4735 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.238549 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.238655 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.241190 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.241280 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.241342 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.241505 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.241862 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.242326 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.251102 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq"] Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.307279 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.307538 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.307657 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.307756 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnkbl\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-kube-api-access-lnkbl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.307837 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.307927 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.308023 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.308109 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.308190 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.308354 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.308431 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.308615 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.308679 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.308737 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410428 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410510 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410552 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410596 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410638 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410668 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410717 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410762 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410797 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410838 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410914 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: 
\"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410940 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410964 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnkbl\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-kube-api-access-lnkbl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.410992 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.416112 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.417379 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.417550 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.418574 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.420277 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: 
I0128 10:15:04.420401 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.421446 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.421659 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.421688 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.422114 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.422147 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.423328 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.423607 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.430031 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnkbl\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-kube-api-access-lnkbl\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-v46fq\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.552691 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.591566 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s"] Jan 28 10:15:04 crc kubenswrapper[4735]: I0128 10:15:04.601357 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493210-sh45s"] Jan 28 10:15:05 crc kubenswrapper[4735]: I0128 10:15:05.056898 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq"] Jan 28 10:15:05 crc kubenswrapper[4735]: W0128 10:15:05.063671 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9acc786_77bc_4555_b7f4_f0a337175a3c.slice/crio-961dfed79e17c95374eeeab4ee0faef8064b2c6474823d274f80e1e21d241c63 WatchSource:0}: Error finding container 961dfed79e17c95374eeeab4ee0faef8064b2c6474823d274f80e1e21d241c63: Status 404 returned error can't find the container with id 961dfed79e17c95374eeeab4ee0faef8064b2c6474823d274f80e1e21d241c63 Jan 28 10:15:05 crc kubenswrapper[4735]: I0128 10:15:05.162445 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" event={"ID":"e9acc786-77bc-4555-b7f4-f0a337175a3c","Type":"ContainerStarted","Data":"961dfed79e17c95374eeeab4ee0faef8064b2c6474823d274f80e1e21d241c63"} Jan 28 10:15:05 crc kubenswrapper[4735]: I0128 10:15:05.547349 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e0a428da-f914-4463-8baf-899aed0920b1" path="/var/lib/kubelet/pods/e0a428da-f914-4463-8baf-899aed0920b1/volumes" Jan 28 10:15:07 crc kubenswrapper[4735]: I0128 10:15:07.183083 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" event={"ID":"e9acc786-77bc-4555-b7f4-f0a337175a3c","Type":"ContainerStarted","Data":"124ced47daea841f9fd3e3395c3e167dd3e6aad2afa455058d6180f3c0d99586"} Jan 28 10:15:07 crc kubenswrapper[4735]: I0128 10:15:07.213163 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" podStartSLOduration=2.193287318 podStartE2EDuration="3.213134199s" podCreationTimestamp="2026-01-28 10:15:04 +0000 UTC" firstStartedPulling="2026-01-28 10:15:05.066981467 +0000 UTC m=+1958.216645840" lastFinishedPulling="2026-01-28 10:15:06.086828348 +0000 UTC m=+1959.236492721" observedRunningTime="2026-01-28 10:15:07.203273969 +0000 UTC m=+1960.352938342" watchObservedRunningTime="2026-01-28 10:15:07.213134199 +0000 UTC m=+1960.362798592" Jan 28 10:15:35 crc kubenswrapper[4735]: I0128 10:15:35.430265 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:15:35 crc kubenswrapper[4735]: I0128 10:15:35.431004 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:15:42 crc kubenswrapper[4735]: I0128 10:15:42.486222 4735 generic.go:334] "Generic (PLEG): container finished" 
podID="e9acc786-77bc-4555-b7f4-f0a337175a3c" containerID="124ced47daea841f9fd3e3395c3e167dd3e6aad2afa455058d6180f3c0d99586" exitCode=0 Jan 28 10:15:42 crc kubenswrapper[4735]: I0128 10:15:42.486316 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" event={"ID":"e9acc786-77bc-4555-b7f4-f0a337175a3c","Type":"ContainerDied","Data":"124ced47daea841f9fd3e3395c3e167dd3e6aad2afa455058d6180f3c0d99586"} Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.957834 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.971657 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-repo-setup-combined-ca-bundle\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.971732 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.971795 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ssh-key-openstack-edpm-ipam\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.971825 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.971878 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-libvirt-combined-ca-bundle\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.971923 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.971964 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-inventory\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.972002 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ovn-combined-ca-bundle\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.972054 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-nova-combined-ca-bundle\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.972083 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-telemetry-combined-ca-bundle\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.972176 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.972285 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-bootstrap-combined-ca-bundle\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.972323 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-neutron-metadata-combined-ca-bundle\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.972352 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnkbl\" (UniqueName: 
\"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-kube-api-access-lnkbl\") pod \"e9acc786-77bc-4555-b7f4-f0a337175a3c\" (UID: \"e9acc786-77bc-4555-b7f4-f0a337175a3c\") " Jan 28 10:15:43 crc kubenswrapper[4735]: I0128 10:15:43.991077 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.001806 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-kube-api-access-lnkbl" (OuterVolumeSpecName: "kube-api-access-lnkbl") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "kube-api-access-lnkbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.005280 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.008866 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.022763 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.022803 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.022871 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.022911 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.022979 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.023256 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.028001 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.038143 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.066120 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-inventory" (OuterVolumeSpecName: "inventory") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076251 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076320 4735 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076340 4735 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076364 
4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnkbl\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-kube-api-access-lnkbl\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076380 4735 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076397 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076412 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076428 4735 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076443 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e9acc786-77bc-4555-b7f4-f0a337175a3c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076460 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076473 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076489 4735 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.076505 4735 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.104950 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e9acc786-77bc-4555-b7f4-f0a337175a3c" (UID: "e9acc786-77bc-4555-b7f4-f0a337175a3c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.177988 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e9acc786-77bc-4555-b7f4-f0a337175a3c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.505780 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" event={"ID":"e9acc786-77bc-4555-b7f4-f0a337175a3c","Type":"ContainerDied","Data":"961dfed79e17c95374eeeab4ee0faef8064b2c6474823d274f80e1e21d241c63"} Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.505821 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="961dfed79e17c95374eeeab4ee0faef8064b2c6474823d274f80e1e21d241c63" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.505875 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-v46fq" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.678691 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q"] Jan 28 10:15:44 crc kubenswrapper[4735]: E0128 10:15:44.679133 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9acc786-77bc-4555-b7f4-f0a337175a3c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.679154 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9acc786-77bc-4555-b7f4-f0a337175a3c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.679343 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9acc786-77bc-4555-b7f4-f0a337175a3c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.680047 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.682906 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.682995 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.683272 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.684317 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.684491 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.694712 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q"] Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.790328 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.790454 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: 
\"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.790511 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkc7h\" (UniqueName: \"kubernetes.io/projected/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-kube-api-access-hkc7h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.790547 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.790586 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.891973 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.892083 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.892134 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkc7h\" (UniqueName: \"kubernetes.io/projected/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-kube-api-access-hkc7h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.892165 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.892203 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.893825 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: 
\"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.897186 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.898383 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.898772 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.914561 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkc7h\" (UniqueName: \"kubernetes.io/projected/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-kube-api-access-hkc7h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4l92q\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:44 crc kubenswrapper[4735]: I0128 10:15:44.998079 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:15:45 crc kubenswrapper[4735]: I0128 10:15:45.535030 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q"] Jan 28 10:15:46 crc kubenswrapper[4735]: I0128 10:15:46.539011 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" event={"ID":"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac","Type":"ContainerStarted","Data":"4296d1e4517a5cf013360c7634ad94d9bc3afbaaf2ea4ba31de811b9481ab24d"} Jan 28 10:15:50 crc kubenswrapper[4735]: I0128 10:15:50.570883 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" event={"ID":"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac","Type":"ContainerStarted","Data":"655c000728d0f4f6a4bf143163388940c92cdfa29b5507326aa9b0526d7eb20d"} Jan 28 10:15:50 crc kubenswrapper[4735]: I0128 10:15:50.590588 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" podStartSLOduration=2.051707604 podStartE2EDuration="6.590565183s" podCreationTimestamp="2026-01-28 10:15:44 +0000 UTC" firstStartedPulling="2026-01-28 10:15:45.542403458 +0000 UTC m=+1998.692067831" lastFinishedPulling="2026-01-28 10:15:50.081261037 +0000 UTC m=+2003.230925410" observedRunningTime="2026-01-28 10:15:50.58589246 +0000 UTC m=+2003.735556833" watchObservedRunningTime="2026-01-28 10:15:50.590565183 +0000 UTC m=+2003.740229556" Jan 28 10:15:51 crc kubenswrapper[4735]: I0128 10:15:51.296601 4735 scope.go:117] "RemoveContainer" containerID="36925ff56fd907ec6648e9ce50b3fedd34a9ef644e25027d95eb08e527183dd5" Jan 28 10:16:05 crc kubenswrapper[4735]: I0128 10:16:05.429877 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:16:05 crc kubenswrapper[4735]: I0128 10:16:05.430543 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:16:35 crc kubenswrapper[4735]: I0128 10:16:35.430292 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:16:35 crc kubenswrapper[4735]: I0128 10:16:35.430849 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:16:35 crc kubenswrapper[4735]: I0128 10:16:35.430897 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:16:35 crc kubenswrapper[4735]: I0128 10:16:35.431627 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ca59299c75c0261d224f376d225e4cfae82f9e72435765962d543c393946e9a"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:16:35 crc kubenswrapper[4735]: I0128 10:16:35.431670 4735 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://0ca59299c75c0261d224f376d225e4cfae82f9e72435765962d543c393946e9a" gracePeriod=600 Jan 28 10:16:35 crc kubenswrapper[4735]: I0128 10:16:35.995306 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="0ca59299c75c0261d224f376d225e4cfae82f9e72435765962d543c393946e9a" exitCode=0 Jan 28 10:16:35 crc kubenswrapper[4735]: I0128 10:16:35.995359 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"0ca59299c75c0261d224f376d225e4cfae82f9e72435765962d543c393946e9a"} Jan 28 10:16:35 crc kubenswrapper[4735]: I0128 10:16:35.995740 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde"} Jan 28 10:16:35 crc kubenswrapper[4735]: I0128 10:16:35.995784 4735 scope.go:117] "RemoveContainer" containerID="07b93bdcd18766809127a9e0c7c53aba857d82df8a42f7e4a2f4e3abe2a2f96b" Jan 28 10:16:53 crc kubenswrapper[4735]: I0128 10:16:53.174642 4735 generic.go:334] "Generic (PLEG): container finished" podID="bbd83f26-98f3-47a9-9fa0-7225ba6b97ac" containerID="655c000728d0f4f6a4bf143163388940c92cdfa29b5507326aa9b0526d7eb20d" exitCode=0 Jan 28 10:16:53 crc kubenswrapper[4735]: I0128 10:16:53.174786 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" event={"ID":"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac","Type":"ContainerDied","Data":"655c000728d0f4f6a4bf143163388940c92cdfa29b5507326aa9b0526d7eb20d"} Jan 28 
10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.605861 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.801801 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovn-combined-ca-bundle\") pod \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.802008 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ssh-key-openstack-edpm-ipam\") pod \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.802048 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-inventory\") pod \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.802146 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkc7h\" (UniqueName: \"kubernetes.io/projected/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-kube-api-access-hkc7h\") pod \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\" (UID: \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.802320 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovncontroller-config-0\") pod \"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\" (UID: 
\"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac\") " Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.807822 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac" (UID: "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.807885 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-kube-api-access-hkc7h" (OuterVolumeSpecName: "kube-api-access-hkc7h") pod "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac" (UID: "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac"). InnerVolumeSpecName "kube-api-access-hkc7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.830923 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac" (UID: "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.832137 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac" (UID: "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.834628 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-inventory" (OuterVolumeSpecName: "inventory") pod "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac" (UID: "bbd83f26-98f3-47a9-9fa0-7225ba6b97ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.904355 4735 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.904731 4735 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.904779 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.904795 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:16:54 crc kubenswrapper[4735]: I0128 10:16:54.904807 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkc7h\" (UniqueName: \"kubernetes.io/projected/bbd83f26-98f3-47a9-9fa0-7225ba6b97ac-kube-api-access-hkc7h\") on node \"crc\" DevicePath \"\"" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.192208 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" event={"ID":"bbd83f26-98f3-47a9-9fa0-7225ba6b97ac","Type":"ContainerDied","Data":"4296d1e4517a5cf013360c7634ad94d9bc3afbaaf2ea4ba31de811b9481ab24d"} Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.192260 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4l92q" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.192264 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4296d1e4517a5cf013360c7634ad94d9bc3afbaaf2ea4ba31de811b9481ab24d" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.304583 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb"] Jan 28 10:16:55 crc kubenswrapper[4735]: E0128 10:16:55.305232 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd83f26-98f3-47a9-9fa0-7225ba6b97ac" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.305327 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd83f26-98f3-47a9-9fa0-7225ba6b97ac" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.305603 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd83f26-98f3-47a9-9fa0-7225ba6b97ac" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.306334 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.310487 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.310489 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.311511 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.311869 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.312055 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.312056 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.317312 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb"] Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.411942 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9cvl\" (UniqueName: \"kubernetes.io/projected/7202ad41-aa50-4012-a34d-df9d19e34a1d-kube-api-access-n9cvl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.411993 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.412084 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.412145 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.412189 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.412304 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.514088 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.514185 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.514235 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9cvl\" (UniqueName: \"kubernetes.io/projected/7202ad41-aa50-4012-a34d-df9d19e34a1d-kube-api-access-n9cvl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.514257 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.514311 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.514378 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.518938 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.519013 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: 
\"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.519630 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.520118 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.522624 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.551328 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9cvl\" (UniqueName: \"kubernetes.io/projected/7202ad41-aa50-4012-a34d-df9d19e34a1d-kube-api-access-n9cvl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 
10:16:55 crc kubenswrapper[4735]: I0128 10:16:55.624439 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:16:56 crc kubenswrapper[4735]: I0128 10:16:56.177443 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb"] Jan 28 10:16:56 crc kubenswrapper[4735]: W0128 10:16:56.178544 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7202ad41_aa50_4012_a34d_df9d19e34a1d.slice/crio-2b39d71bf42505e479c8e13c310313d96212c13f249f07c88c216c5ef4cd2675 WatchSource:0}: Error finding container 2b39d71bf42505e479c8e13c310313d96212c13f249f07c88c216c5ef4cd2675: Status 404 returned error can't find the container with id 2b39d71bf42505e479c8e13c310313d96212c13f249f07c88c216c5ef4cd2675 Jan 28 10:16:56 crc kubenswrapper[4735]: I0128 10:16:56.182852 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 10:16:56 crc kubenswrapper[4735]: I0128 10:16:56.202297 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" event={"ID":"7202ad41-aa50-4012-a34d-df9d19e34a1d","Type":"ContainerStarted","Data":"2b39d71bf42505e479c8e13c310313d96212c13f249f07c88c216c5ef4cd2675"} Jan 28 10:16:57 crc kubenswrapper[4735]: I0128 10:16:57.211442 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" event={"ID":"7202ad41-aa50-4012-a34d-df9d19e34a1d","Type":"ContainerStarted","Data":"6edb277743f08dbfa203edb3a515f75940178d0f30df0bcc9a6f65ed34134f13"} Jan 28 10:17:44 crc kubenswrapper[4735]: I0128 10:17:44.610699 4735 generic.go:334] "Generic (PLEG): container finished" podID="7202ad41-aa50-4012-a34d-df9d19e34a1d" 
containerID="6edb277743f08dbfa203edb3a515f75940178d0f30df0bcc9a6f65ed34134f13" exitCode=0 Jan 28 10:17:44 crc kubenswrapper[4735]: I0128 10:17:44.610845 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" event={"ID":"7202ad41-aa50-4012-a34d-df9d19e34a1d","Type":"ContainerDied","Data":"6edb277743f08dbfa203edb3a515f75940178d0f30df0bcc9a6f65ed34134f13"} Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.043222 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.195626 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-nova-metadata-neutron-config-0\") pod \"7202ad41-aa50-4012-a34d-df9d19e34a1d\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.195727 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9cvl\" (UniqueName: \"kubernetes.io/projected/7202ad41-aa50-4012-a34d-df9d19e34a1d-kube-api-access-n9cvl\") pod \"7202ad41-aa50-4012-a34d-df9d19e34a1d\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.195826 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-ssh-key-openstack-edpm-ipam\") pod \"7202ad41-aa50-4012-a34d-df9d19e34a1d\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.195897 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-metadata-combined-ca-bundle\") pod \"7202ad41-aa50-4012-a34d-df9d19e34a1d\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.195952 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7202ad41-aa50-4012-a34d-df9d19e34a1d\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.196557 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-inventory\") pod \"7202ad41-aa50-4012-a34d-df9d19e34a1d\" (UID: \"7202ad41-aa50-4012-a34d-df9d19e34a1d\") " Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.201634 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7202ad41-aa50-4012-a34d-df9d19e34a1d" (UID: "7202ad41-aa50-4012-a34d-df9d19e34a1d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.202010 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7202ad41-aa50-4012-a34d-df9d19e34a1d-kube-api-access-n9cvl" (OuterVolumeSpecName: "kube-api-access-n9cvl") pod "7202ad41-aa50-4012-a34d-df9d19e34a1d" (UID: "7202ad41-aa50-4012-a34d-df9d19e34a1d"). InnerVolumeSpecName "kube-api-access-n9cvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.226518 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-inventory" (OuterVolumeSpecName: "inventory") pod "7202ad41-aa50-4012-a34d-df9d19e34a1d" (UID: "7202ad41-aa50-4012-a34d-df9d19e34a1d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.228455 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7202ad41-aa50-4012-a34d-df9d19e34a1d" (UID: "7202ad41-aa50-4012-a34d-df9d19e34a1d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.230067 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7202ad41-aa50-4012-a34d-df9d19e34a1d" (UID: "7202ad41-aa50-4012-a34d-df9d19e34a1d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.238154 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7202ad41-aa50-4012-a34d-df9d19e34a1d" (UID: "7202ad41-aa50-4012-a34d-df9d19e34a1d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.299430 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.299588 4735 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.299651 4735 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.299817 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.299906 4735 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7202ad41-aa50-4012-a34d-df9d19e34a1d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.299980 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9cvl\" (UniqueName: \"kubernetes.io/projected/7202ad41-aa50-4012-a34d-df9d19e34a1d-kube-api-access-n9cvl\") on node \"crc\" DevicePath \"\"" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.631457 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" event={"ID":"7202ad41-aa50-4012-a34d-df9d19e34a1d","Type":"ContainerDied","Data":"2b39d71bf42505e479c8e13c310313d96212c13f249f07c88c216c5ef4cd2675"} Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.631502 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b39d71bf42505e479c8e13c310313d96212c13f249f07c88c216c5ef4cd2675" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.631578 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.732270 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4"] Jan 28 10:17:46 crc kubenswrapper[4735]: E0128 10:17:46.732719 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7202ad41-aa50-4012-a34d-df9d19e34a1d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.732741 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7202ad41-aa50-4012-a34d-df9d19e34a1d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.733004 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="7202ad41-aa50-4012-a34d-df9d19e34a1d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.733717 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.735756 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.735787 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.735931 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.736417 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.737973 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.741888 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4"] Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.912366 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.912440 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: 
\"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.912867 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.912922 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:46 crc kubenswrapper[4735]: I0128 10:17:46.913046 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7tgn\" (UniqueName: \"kubernetes.io/projected/7c337230-3c7c-49aa-97cb-576327428c01-kube-api-access-j7tgn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.014993 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7tgn\" (UniqueName: \"kubernetes.io/projected/7c337230-3c7c-49aa-97cb-576327428c01-kube-api-access-j7tgn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.015072 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.015103 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.015216 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.015244 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.019885 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" 
(UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.020045 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.020114 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.021306 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.033208 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7tgn\" (UniqueName: \"kubernetes.io/projected/7c337230-3c7c-49aa-97cb-576327428c01-kube-api-access-j7tgn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.052083 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.576794 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4"] Jan 28 10:17:47 crc kubenswrapper[4735]: W0128 10:17:47.581944 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c337230_3c7c_49aa_97cb_576327428c01.slice/crio-5778a44943303d10623dcddb545f952b1c050316e0cbeeaaa7ae3e09558900dc WatchSource:0}: Error finding container 5778a44943303d10623dcddb545f952b1c050316e0cbeeaaa7ae3e09558900dc: Status 404 returned error can't find the container with id 5778a44943303d10623dcddb545f952b1c050316e0cbeeaaa7ae3e09558900dc Jan 28 10:17:47 crc kubenswrapper[4735]: I0128 10:17:47.645313 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" event={"ID":"7c337230-3c7c-49aa-97cb-576327428c01","Type":"ContainerStarted","Data":"5778a44943303d10623dcddb545f952b1c050316e0cbeeaaa7ae3e09558900dc"} Jan 28 10:17:49 crc kubenswrapper[4735]: I0128 10:17:49.664034 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" event={"ID":"7c337230-3c7c-49aa-97cb-576327428c01","Type":"ContainerStarted","Data":"b0f5b4ec8646f3ffc374c57bb5af6f2bd3960a28e5dfb64063253008d8f8747a"} Jan 28 10:17:49 crc kubenswrapper[4735]: I0128 10:17:49.695863 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" podStartSLOduration=2.576340116 podStartE2EDuration="3.695836184s" podCreationTimestamp="2026-01-28 10:17:46 +0000 UTC" firstStartedPulling="2026-01-28 10:17:47.584719356 +0000 UTC m=+2120.734383729" lastFinishedPulling="2026-01-28 10:17:48.704215424 +0000 UTC m=+2121.853879797" 
observedRunningTime="2026-01-28 10:17:49.681574799 +0000 UTC m=+2122.831239172" watchObservedRunningTime="2026-01-28 10:17:49.695836184 +0000 UTC m=+2122.845500597" Jan 28 10:18:35 crc kubenswrapper[4735]: I0128 10:18:35.429799 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:18:35 crc kubenswrapper[4735]: I0128 10:18:35.431037 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.567873 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jc7rs"] Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.570551 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.579462 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc7rs"] Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.739419 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9z9b\" (UniqueName: \"kubernetes.io/projected/b398b0f1-3c91-4249-8644-e91704da8778-kube-api-access-t9z9b\") pod \"redhat-marketplace-jc7rs\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.739662 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-utilities\") pod \"redhat-marketplace-jc7rs\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.740160 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-catalog-content\") pod \"redhat-marketplace-jc7rs\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.842512 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-catalog-content\") pod \"redhat-marketplace-jc7rs\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.842874 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t9z9b\" (UniqueName: \"kubernetes.io/projected/b398b0f1-3c91-4249-8644-e91704da8778-kube-api-access-t9z9b\") pod \"redhat-marketplace-jc7rs\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.843038 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-utilities\") pod \"redhat-marketplace-jc7rs\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.843053 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-catalog-content\") pod \"redhat-marketplace-jc7rs\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.843278 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-utilities\") pod \"redhat-marketplace-jc7rs\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.880656 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9z9b\" (UniqueName: \"kubernetes.io/projected/b398b0f1-3c91-4249-8644-e91704da8778-kube-api-access-t9z9b\") pod \"redhat-marketplace-jc7rs\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:44 crc kubenswrapper[4735]: I0128 10:18:44.886659 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:45 crc kubenswrapper[4735]: I0128 10:18:45.359175 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc7rs"] Jan 28 10:18:46 crc kubenswrapper[4735]: I0128 10:18:46.123612 4735 generic.go:334] "Generic (PLEG): container finished" podID="b398b0f1-3c91-4249-8644-e91704da8778" containerID="01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966" exitCode=0 Jan 28 10:18:46 crc kubenswrapper[4735]: I0128 10:18:46.123667 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc7rs" event={"ID":"b398b0f1-3c91-4249-8644-e91704da8778","Type":"ContainerDied","Data":"01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966"} Jan 28 10:18:46 crc kubenswrapper[4735]: I0128 10:18:46.123699 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc7rs" event={"ID":"b398b0f1-3c91-4249-8644-e91704da8778","Type":"ContainerStarted","Data":"7c41313abc10e675d911e37109eb77f624a5f555d8e3ff8885c0bda1b2a23fc3"} Jan 28 10:18:48 crc kubenswrapper[4735]: I0128 10:18:48.147688 4735 generic.go:334] "Generic (PLEG): container finished" podID="b398b0f1-3c91-4249-8644-e91704da8778" containerID="307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca" exitCode=0 Jan 28 10:18:48 crc kubenswrapper[4735]: I0128 10:18:48.148000 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc7rs" event={"ID":"b398b0f1-3c91-4249-8644-e91704da8778","Type":"ContainerDied","Data":"307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca"} Jan 28 10:18:49 crc kubenswrapper[4735]: I0128 10:18:49.166035 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc7rs" 
event={"ID":"b398b0f1-3c91-4249-8644-e91704da8778","Type":"ContainerStarted","Data":"0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f"} Jan 28 10:18:49 crc kubenswrapper[4735]: I0128 10:18:49.185905 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jc7rs" podStartSLOduration=2.647484278 podStartE2EDuration="5.185886916s" podCreationTimestamp="2026-01-28 10:18:44 +0000 UTC" firstStartedPulling="2026-01-28 10:18:46.125521315 +0000 UTC m=+2179.275185688" lastFinishedPulling="2026-01-28 10:18:48.663923963 +0000 UTC m=+2181.813588326" observedRunningTime="2026-01-28 10:18:49.182117645 +0000 UTC m=+2182.331782018" watchObservedRunningTime="2026-01-28 10:18:49.185886916 +0000 UTC m=+2182.335551289" Jan 28 10:18:54 crc kubenswrapper[4735]: I0128 10:18:54.888179 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:54 crc kubenswrapper[4735]: I0128 10:18:54.889142 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:54 crc kubenswrapper[4735]: I0128 10:18:54.939546 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:55 crc kubenswrapper[4735]: I0128 10:18:55.275480 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:56 crc kubenswrapper[4735]: I0128 10:18:56.356238 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc7rs"] Jan 28 10:18:57 crc kubenswrapper[4735]: I0128 10:18:57.242355 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jc7rs" podUID="b398b0f1-3c91-4249-8644-e91704da8778" containerName="registry-server" 
containerID="cri-o://0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f" gracePeriod=2 Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.235171 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.255671 4735 generic.go:334] "Generic (PLEG): container finished" podID="b398b0f1-3c91-4249-8644-e91704da8778" containerID="0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f" exitCode=0 Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.255728 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc7rs" event={"ID":"b398b0f1-3c91-4249-8644-e91704da8778","Type":"ContainerDied","Data":"0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f"} Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.255778 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc7rs" event={"ID":"b398b0f1-3c91-4249-8644-e91704da8778","Type":"ContainerDied","Data":"7c41313abc10e675d911e37109eb77f624a5f555d8e3ff8885c0bda1b2a23fc3"} Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.255780 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc7rs" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.255798 4735 scope.go:117] "RemoveContainer" containerID="0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.282326 4735 scope.go:117] "RemoveContainer" containerID="307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.305391 4735 scope.go:117] "RemoveContainer" containerID="01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.352138 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-utilities\") pod \"b398b0f1-3c91-4249-8644-e91704da8778\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.353105 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9z9b\" (UniqueName: \"kubernetes.io/projected/b398b0f1-3c91-4249-8644-e91704da8778-kube-api-access-t9z9b\") pod \"b398b0f1-3c91-4249-8644-e91704da8778\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.353794 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-catalog-content\") pod \"b398b0f1-3c91-4249-8644-e91704da8778\" (UID: \"b398b0f1-3c91-4249-8644-e91704da8778\") " Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.354254 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-utilities" (OuterVolumeSpecName: "utilities") pod "b398b0f1-3c91-4249-8644-e91704da8778" (UID: 
"b398b0f1-3c91-4249-8644-e91704da8778"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.355010 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.359498 4735 scope.go:117] "RemoveContainer" containerID="0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f" Jan 28 10:18:58 crc kubenswrapper[4735]: E0128 10:18:58.360374 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f\": container with ID starting with 0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f not found: ID does not exist" containerID="0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.360415 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f"} err="failed to get container status \"0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f\": rpc error: code = NotFound desc = could not find container \"0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f\": container with ID starting with 0bd34f4169b56a78716db5600837f3ebabe2ae13358a36b07cc69b2b881e9d9f not found: ID does not exist" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.360444 4735 scope.go:117] "RemoveContainer" containerID="307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.360846 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b398b0f1-3c91-4249-8644-e91704da8778-kube-api-access-t9z9b" (OuterVolumeSpecName: "kube-api-access-t9z9b") pod "b398b0f1-3c91-4249-8644-e91704da8778" (UID: "b398b0f1-3c91-4249-8644-e91704da8778"). InnerVolumeSpecName "kube-api-access-t9z9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:18:58 crc kubenswrapper[4735]: E0128 10:18:58.360954 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca\": container with ID starting with 307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca not found: ID does not exist" containerID="307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.360980 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca"} err="failed to get container status \"307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca\": rpc error: code = NotFound desc = could not find container \"307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca\": container with ID starting with 307a3d0b2558598c6db3b7828586ef18b2323459f91d5dec8669bc12108503ca not found: ID does not exist" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.361000 4735 scope.go:117] "RemoveContainer" containerID="01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966" Jan 28 10:18:58 crc kubenswrapper[4735]: E0128 10:18:58.362107 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966\": container with ID starting with 01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966 not found: ID does not exist" 
containerID="01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.362145 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966"} err="failed to get container status \"01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966\": rpc error: code = NotFound desc = could not find container \"01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966\": container with ID starting with 01d662c7def5163c37c35bc3d4b2a6a86a49005afdc87363e7e6c75a847b3966 not found: ID does not exist" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.381878 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b398b0f1-3c91-4249-8644-e91704da8778" (UID: "b398b0f1-3c91-4249-8644-e91704da8778"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.456673 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9z9b\" (UniqueName: \"kubernetes.io/projected/b398b0f1-3c91-4249-8644-e91704da8778-kube-api-access-t9z9b\") on node \"crc\" DevicePath \"\"" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.456973 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b398b0f1-3c91-4249-8644-e91704da8778-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.606529 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc7rs"] Jan 28 10:18:58 crc kubenswrapper[4735]: I0128 10:18:58.618817 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc7rs"] Jan 28 10:18:59 crc kubenswrapper[4735]: I0128 10:18:59.516399 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b398b0f1-3c91-4249-8644-e91704da8778" path="/var/lib/kubelet/pods/b398b0f1-3c91-4249-8644-e91704da8778/volumes" Jan 28 10:19:05 crc kubenswrapper[4735]: I0128 10:19:05.431056 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:19:05 crc kubenswrapper[4735]: I0128 10:19:05.431946 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:19:35 crc kubenswrapper[4735]: I0128 
10:19:35.429973 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:19:35 crc kubenswrapper[4735]: I0128 10:19:35.430589 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:19:35 crc kubenswrapper[4735]: I0128 10:19:35.430641 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:19:35 crc kubenswrapper[4735]: I0128 10:19:35.431572 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:19:35 crc kubenswrapper[4735]: I0128 10:19:35.431684 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" gracePeriod=600 Jan 28 10:19:35 crc kubenswrapper[4735]: E0128 10:19:35.579833 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:19:35 crc kubenswrapper[4735]: I0128 10:19:35.624153 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" exitCode=0 Jan 28 10:19:35 crc kubenswrapper[4735]: I0128 10:19:35.624205 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde"} Jan 28 10:19:35 crc kubenswrapper[4735]: I0128 10:19:35.624257 4735 scope.go:117] "RemoveContainer" containerID="0ca59299c75c0261d224f376d225e4cfae82f9e72435765962d543c393946e9a" Jan 28 10:19:35 crc kubenswrapper[4735]: I0128 10:19:35.625423 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:19:35 crc kubenswrapper[4735]: E0128 10:19:35.625849 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:19:48 crc kubenswrapper[4735]: I0128 10:19:48.503019 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:19:48 crc kubenswrapper[4735]: E0128 10:19:48.503854 4735 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:20:03 crc kubenswrapper[4735]: I0128 10:20:03.504319 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:20:03 crc kubenswrapper[4735]: E0128 10:20:03.505427 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:20:15 crc kubenswrapper[4735]: I0128 10:20:15.513724 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:20:15 crc kubenswrapper[4735]: E0128 10:20:15.519404 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:20:29 crc kubenswrapper[4735]: I0128 10:20:29.503294 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:20:29 crc kubenswrapper[4735]: E0128 10:20:29.504374 4735 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:20:41 crc kubenswrapper[4735]: I0128 10:20:41.503266 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:20:41 crc kubenswrapper[4735]: E0128 10:20:41.504212 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:20:52 crc kubenswrapper[4735]: I0128 10:20:52.502770 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:20:52 crc kubenswrapper[4735]: E0128 10:20:52.504264 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:21:03 crc kubenswrapper[4735]: I0128 10:21:03.503782 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:21:03 crc kubenswrapper[4735]: E0128 10:21:03.504629 4735 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:21:14 crc kubenswrapper[4735]: I0128 10:21:14.503946 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:21:14 crc kubenswrapper[4735]: E0128 10:21:14.504787 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:21:26 crc kubenswrapper[4735]: I0128 10:21:26.503110 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:21:26 crc kubenswrapper[4735]: E0128 10:21:26.503798 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:21:37 crc kubenswrapper[4735]: I0128 10:21:37.509164 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:21:37 crc kubenswrapper[4735]: E0128 
10:21:37.509998 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:21:50 crc kubenswrapper[4735]: I0128 10:21:50.504902 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:21:50 crc kubenswrapper[4735]: E0128 10:21:50.505989 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:22:03 crc kubenswrapper[4735]: I0128 10:22:03.503179 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:22:03 crc kubenswrapper[4735]: E0128 10:22:03.503883 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:22:09 crc kubenswrapper[4735]: I0128 10:22:09.418615 4735 generic.go:334] "Generic (PLEG): container finished" podID="7c337230-3c7c-49aa-97cb-576327428c01" 
containerID="b0f5b4ec8646f3ffc374c57bb5af6f2bd3960a28e5dfb64063253008d8f8747a" exitCode=0 Jan 28 10:22:09 crc kubenswrapper[4735]: I0128 10:22:09.418695 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" event={"ID":"7c337230-3c7c-49aa-97cb-576327428c01","Type":"ContainerDied","Data":"b0f5b4ec8646f3ffc374c57bb5af6f2bd3960a28e5dfb64063253008d8f8747a"} Jan 28 10:22:10 crc kubenswrapper[4735]: I0128 10:22:10.866158 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:22:10 crc kubenswrapper[4735]: I0128 10:22:10.978535 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7tgn\" (UniqueName: \"kubernetes.io/projected/7c337230-3c7c-49aa-97cb-576327428c01-kube-api-access-j7tgn\") pod \"7c337230-3c7c-49aa-97cb-576327428c01\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " Jan 28 10:22:10 crc kubenswrapper[4735]: I0128 10:22:10.978710 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-combined-ca-bundle\") pod \"7c337230-3c7c-49aa-97cb-576327428c01\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " Jan 28 10:22:10 crc kubenswrapper[4735]: I0128 10:22:10.978874 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-inventory\") pod \"7c337230-3c7c-49aa-97cb-576327428c01\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " Jan 28 10:22:10 crc kubenswrapper[4735]: I0128 10:22:10.978927 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-ssh-key-openstack-edpm-ipam\") 
pod \"7c337230-3c7c-49aa-97cb-576327428c01\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " Jan 28 10:22:10 crc kubenswrapper[4735]: I0128 10:22:10.978984 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-secret-0\") pod \"7c337230-3c7c-49aa-97cb-576327428c01\" (UID: \"7c337230-3c7c-49aa-97cb-576327428c01\") " Jan 28 10:22:10 crc kubenswrapper[4735]: I0128 10:22:10.984387 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7c337230-3c7c-49aa-97cb-576327428c01" (UID: "7c337230-3c7c-49aa-97cb-576327428c01"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:22:10 crc kubenswrapper[4735]: I0128 10:22:10.984670 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c337230-3c7c-49aa-97cb-576327428c01-kube-api-access-j7tgn" (OuterVolumeSpecName: "kube-api-access-j7tgn") pod "7c337230-3c7c-49aa-97cb-576327428c01" (UID: "7c337230-3c7c-49aa-97cb-576327428c01"). InnerVolumeSpecName "kube-api-access-j7tgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.005230 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c337230-3c7c-49aa-97cb-576327428c01" (UID: "7c337230-3c7c-49aa-97cb-576327428c01"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.007133 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-inventory" (OuterVolumeSpecName: "inventory") pod "7c337230-3c7c-49aa-97cb-576327428c01" (UID: "7c337230-3c7c-49aa-97cb-576327428c01"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.010354 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7c337230-3c7c-49aa-97cb-576327428c01" (UID: "7c337230-3c7c-49aa-97cb-576327428c01"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.080923 4735 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.080973 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.080992 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.081006 4735 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c337230-3c7c-49aa-97cb-576327428c01-libvirt-secret-0\") on node 
\"crc\" DevicePath \"\"" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.081020 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7tgn\" (UniqueName: \"kubernetes.io/projected/7c337230-3c7c-49aa-97cb-576327428c01-kube-api-access-j7tgn\") on node \"crc\" DevicePath \"\"" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.439587 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" event={"ID":"7c337230-3c7c-49aa-97cb-576327428c01","Type":"ContainerDied","Data":"5778a44943303d10623dcddb545f952b1c050316e0cbeeaaa7ae3e09558900dc"} Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.439631 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5778a44943303d10623dcddb545f952b1c050316e0cbeeaaa7ae3e09558900dc" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.439674 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.541295 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm"] Jan 28 10:22:11 crc kubenswrapper[4735]: E0128 10:22:11.541663 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b398b0f1-3c91-4249-8644-e91704da8778" containerName="registry-server" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.541678 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b398b0f1-3c91-4249-8644-e91704da8778" containerName="registry-server" Jan 28 10:22:11 crc kubenswrapper[4735]: E0128 10:22:11.541709 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b398b0f1-3c91-4249-8644-e91704da8778" containerName="extract-utilities" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.541718 4735 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b398b0f1-3c91-4249-8644-e91704da8778" containerName="extract-utilities" Jan 28 10:22:11 crc kubenswrapper[4735]: E0128 10:22:11.541736 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c337230-3c7c-49aa-97cb-576327428c01" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.541765 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c337230-3c7c-49aa-97cb-576327428c01" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 10:22:11 crc kubenswrapper[4735]: E0128 10:22:11.541786 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b398b0f1-3c91-4249-8644-e91704da8778" containerName="extract-content" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.541794 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b398b0f1-3c91-4249-8644-e91704da8778" containerName="extract-content" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.542033 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b398b0f1-3c91-4249-8644-e91704da8778" containerName="registry-server" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.542059 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c337230-3c7c-49aa-97cb-576327428c01" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.542667 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.547625 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.548011 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.548246 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.548430 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.548911 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.548958 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.548925 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.556905 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm"] Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.692666 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: 
I0128 10:22:11.692845 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.692890 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.692931 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.692959 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.693070 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.693114 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.693163 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxm7\" (UniqueName: \"kubernetes.io/projected/71c10727-e356-4d4a-a316-b48e9d5756bd-kube-api-access-ntxm7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.693452 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.795157 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: 
\"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.795513 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.795794 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxm7\" (UniqueName: \"kubernetes.io/projected/71c10727-e356-4d4a-a316-b48e9d5756bd-kube-api-access-ntxm7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.795957 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.796112 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.796226 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.796352 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.796476 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.796658 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.800236 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.800231 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.800883 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.800998 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.802420 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.802442 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.803149 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.806093 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.813353 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxm7\" (UniqueName: \"kubernetes.io/projected/71c10727-e356-4d4a-a316-b48e9d5756bd-kube-api-access-ntxm7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dzfkm\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:11 crc kubenswrapper[4735]: I0128 10:22:11.866132 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:22:12 crc kubenswrapper[4735]: I0128 10:22:12.431222 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm"] Jan 28 10:22:12 crc kubenswrapper[4735]: I0128 10:22:12.440200 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 10:22:12 crc kubenswrapper[4735]: I0128 10:22:12.449514 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" event={"ID":"71c10727-e356-4d4a-a316-b48e9d5756bd","Type":"ContainerStarted","Data":"f3a3e895285966be6de8bbe8f8648d98253201ac532e230fb97e9845fc2187a6"} Jan 28 10:22:13 crc kubenswrapper[4735]: I0128 10:22:13.460692 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" event={"ID":"71c10727-e356-4d4a-a316-b48e9d5756bd","Type":"ContainerStarted","Data":"d1ee1171539103c35064ed9f7642dae3414f1859bb7179d1c74d4eef977fc782"} Jan 28 10:22:15 crc kubenswrapper[4735]: I0128 10:22:15.503350 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:22:15 crc kubenswrapper[4735]: E0128 10:22:15.504321 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:22:26 crc kubenswrapper[4735]: I0128 10:22:26.503549 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:22:26 crc kubenswrapper[4735]: E0128 
10:22:26.504866 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:22:41 crc kubenswrapper[4735]: I0128 10:22:41.503263 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:22:41 crc kubenswrapper[4735]: E0128 10:22:41.504215 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:22:53 crc kubenswrapper[4735]: I0128 10:22:53.504510 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:22:53 crc kubenswrapper[4735]: E0128 10:22:53.505511 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:23:04 crc kubenswrapper[4735]: I0128 10:23:04.502513 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:23:04 crc 
kubenswrapper[4735]: E0128 10:23:04.503296 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:23:19 crc kubenswrapper[4735]: I0128 10:23:19.504348 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:23:19 crc kubenswrapper[4735]: E0128 10:23:19.504951 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:23:30 crc kubenswrapper[4735]: I0128 10:23:30.502920 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:23:30 crc kubenswrapper[4735]: E0128 10:23:30.503661 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:23:44 crc kubenswrapper[4735]: I0128 10:23:44.503127 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 
28 10:23:44 crc kubenswrapper[4735]: E0128 10:23:44.505048 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:23:59 crc kubenswrapper[4735]: I0128 10:23:59.502453 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:23:59 crc kubenswrapper[4735]: E0128 10:23:59.503526 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:24:12 crc kubenswrapper[4735]: I0128 10:24:12.502562 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:24:12 crc kubenswrapper[4735]: E0128 10:24:12.503370 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:24:24 crc kubenswrapper[4735]: I0128 10:24:24.503119 4735 scope.go:117] "RemoveContainer" 
containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:24:24 crc kubenswrapper[4735]: E0128 10:24:24.504272 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:24:31 crc kubenswrapper[4735]: I0128 10:24:31.768290 4735 generic.go:334] "Generic (PLEG): container finished" podID="71c10727-e356-4d4a-a316-b48e9d5756bd" containerID="d1ee1171539103c35064ed9f7642dae3414f1859bb7179d1c74d4eef977fc782" exitCode=0 Jan 28 10:24:31 crc kubenswrapper[4735]: I0128 10:24:31.768803 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" event={"ID":"71c10727-e356-4d4a-a316-b48e9d5756bd","Type":"ContainerDied","Data":"d1ee1171539103c35064ed9f7642dae3414f1859bb7179d1c74d4eef977fc782"} Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.213357 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.399999 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-combined-ca-bundle\") pod \"71c10727-e356-4d4a-a316-b48e9d5756bd\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.400076 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntxm7\" (UniqueName: \"kubernetes.io/projected/71c10727-e356-4d4a-a316-b48e9d5756bd-kube-api-access-ntxm7\") pod \"71c10727-e356-4d4a-a316-b48e9d5756bd\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.400141 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-extra-config-0\") pod \"71c10727-e356-4d4a-a316-b48e9d5756bd\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.400195 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-0\") pod \"71c10727-e356-4d4a-a316-b48e9d5756bd\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.400213 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-1\") pod \"71c10727-e356-4d4a-a316-b48e9d5756bd\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.400265 4735 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-ssh-key-openstack-edpm-ipam\") pod \"71c10727-e356-4d4a-a316-b48e9d5756bd\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.400341 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-0\") pod \"71c10727-e356-4d4a-a316-b48e9d5756bd\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.400426 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-1\") pod \"71c10727-e356-4d4a-a316-b48e9d5756bd\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.400474 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-inventory\") pod \"71c10727-e356-4d4a-a316-b48e9d5756bd\" (UID: \"71c10727-e356-4d4a-a316-b48e9d5756bd\") " Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.408174 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "71c10727-e356-4d4a-a316-b48e9d5756bd" (UID: "71c10727-e356-4d4a-a316-b48e9d5756bd"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.408202 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c10727-e356-4d4a-a316-b48e9d5756bd-kube-api-access-ntxm7" (OuterVolumeSpecName: "kube-api-access-ntxm7") pod "71c10727-e356-4d4a-a316-b48e9d5756bd" (UID: "71c10727-e356-4d4a-a316-b48e9d5756bd"). InnerVolumeSpecName "kube-api-access-ntxm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.426053 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "71c10727-e356-4d4a-a316-b48e9d5756bd" (UID: "71c10727-e356-4d4a-a316-b48e9d5756bd"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.429792 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "71c10727-e356-4d4a-a316-b48e9d5756bd" (UID: "71c10727-e356-4d4a-a316-b48e9d5756bd"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.430269 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71c10727-e356-4d4a-a316-b48e9d5756bd" (UID: "71c10727-e356-4d4a-a316-b48e9d5756bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.430516 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "71c10727-e356-4d4a-a316-b48e9d5756bd" (UID: "71c10727-e356-4d4a-a316-b48e9d5756bd"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.437390 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "71c10727-e356-4d4a-a316-b48e9d5756bd" (UID: "71c10727-e356-4d4a-a316-b48e9d5756bd"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.443570 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-inventory" (OuterVolumeSpecName: "inventory") pod "71c10727-e356-4d4a-a316-b48e9d5756bd" (UID: "71c10727-e356-4d4a-a316-b48e9d5756bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.449000 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "71c10727-e356-4d4a-a316-b48e9d5756bd" (UID: "71c10727-e356-4d4a-a316-b48e9d5756bd"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.502403 4735 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.502442 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntxm7\" (UniqueName: \"kubernetes.io/projected/71c10727-e356-4d4a-a316-b48e9d5756bd-kube-api-access-ntxm7\") on node \"crc\" DevicePath \"\"" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.502467 4735 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.502476 4735 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.502563 4735 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.502583 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.502597 4735 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.502612 4735 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.502624 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71c10727-e356-4d4a-a316-b48e9d5756bd-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.787163 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" event={"ID":"71c10727-e356-4d4a-a316-b48e9d5756bd","Type":"ContainerDied","Data":"f3a3e895285966be6de8bbe8f8648d98253201ac532e230fb97e9845fc2187a6"} Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.787198 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3a3e895285966be6de8bbe8f8648d98253201ac532e230fb97e9845fc2187a6" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.787237 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dzfkm" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.897697 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk"] Jan 28 10:24:33 crc kubenswrapper[4735]: E0128 10:24:33.900380 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c10727-e356-4d4a-a316-b48e9d5756bd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.900410 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c10727-e356-4d4a-a316-b48e9d5756bd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.900615 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c10727-e356-4d4a-a316-b48e9d5756bd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.901409 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.904284 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.904800 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-j996r" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.905066 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.905214 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.906950 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 10:24:33 crc kubenswrapper[4735]: I0128 10:24:33.914185 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk"] Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.011959 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.012241 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc52g\" (UniqueName: \"kubernetes.io/projected/05947d1f-31c3-4046-b5e5-c70249faa781-kube-api-access-tc52g\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.012274 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.012311 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.012420 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.012464 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.012682 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.114608 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.114696 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.114731 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.114775 4735 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.114861 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.114924 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.115006 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc52g\" (UniqueName: \"kubernetes.io/projected/05947d1f-31c3-4046-b5e5-c70249faa781-kube-api-access-tc52g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.118967 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.119689 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.120206 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.120368 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.125143 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: 
\"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.127009 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.146554 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc52g\" (UniqueName: \"kubernetes.io/projected/05947d1f-31c3-4046-b5e5-c70249faa781-kube-api-access-tc52g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f24zk\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.228687 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.769887 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk"] Jan 28 10:24:34 crc kubenswrapper[4735]: I0128 10:24:34.797006 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" event={"ID":"05947d1f-31c3-4046-b5e5-c70249faa781","Type":"ContainerStarted","Data":"af4848f91a5848323f55fe004c8b1cd86a799e8a3dfae25dd32fc7c4163a0d49"} Jan 28 10:24:35 crc kubenswrapper[4735]: I0128 10:24:35.504001 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:24:35 crc kubenswrapper[4735]: E0128 10:24:35.504984 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:24:35 crc kubenswrapper[4735]: I0128 10:24:35.807570 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" event={"ID":"05947d1f-31c3-4046-b5e5-c70249faa781","Type":"ContainerStarted","Data":"1ca703f9a62a9c5033a4462276dd5e2220290c93d20b879a53532b763c035c9e"} Jan 28 10:24:35 crc kubenswrapper[4735]: I0128 10:24:35.830641 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" podStartSLOduration=2.383613642 podStartE2EDuration="2.830613809s" podCreationTimestamp="2026-01-28 10:24:33 +0000 UTC" firstStartedPulling="2026-01-28 
10:24:34.775539169 +0000 UTC m=+2527.925203542" lastFinishedPulling="2026-01-28 10:24:35.222539316 +0000 UTC m=+2528.372203709" observedRunningTime="2026-01-28 10:24:35.827102752 +0000 UTC m=+2528.976767155" watchObservedRunningTime="2026-01-28 10:24:35.830613809 +0000 UTC m=+2528.980278192" Jan 28 10:24:50 crc kubenswrapper[4735]: I0128 10:24:50.503134 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:24:50 crc kubenswrapper[4735]: I0128 10:24:50.955001 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"dbfa7576c396dfe90ddc7f77124fc640faab1a0d28cc7b69eb0aff2b00d1d8d0"} Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.004197 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n4t2t"] Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.007489 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.016730 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n4t2t"] Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.040494 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-utilities\") pod \"community-operators-n4t2t\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.040740 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkklt\" (UniqueName: \"kubernetes.io/projected/7cb33f89-d2a2-4947-aeda-65c1518beb1a-kube-api-access-mkklt\") pod \"community-operators-n4t2t\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.040846 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-catalog-content\") pod \"community-operators-n4t2t\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.142504 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-catalog-content\") pod \"community-operators-n4t2t\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.142624 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-utilities\") pod \"community-operators-n4t2t\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.142760 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkklt\" (UniqueName: \"kubernetes.io/projected/7cb33f89-d2a2-4947-aeda-65c1518beb1a-kube-api-access-mkklt\") pod \"community-operators-n4t2t\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.143278 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-catalog-content\") pod \"community-operators-n4t2t\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.143324 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-utilities\") pod \"community-operators-n4t2t\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.165303 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkklt\" (UniqueName: \"kubernetes.io/projected/7cb33f89-d2a2-4947-aeda-65c1518beb1a-kube-api-access-mkklt\") pod \"community-operators-n4t2t\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.327770 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:23 crc kubenswrapper[4735]: I0128 10:25:23.875637 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n4t2t"] Jan 28 10:25:24 crc kubenswrapper[4735]: I0128 10:25:24.276419 4735 generic.go:334] "Generic (PLEG): container finished" podID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerID="bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b" exitCode=0 Jan 28 10:25:24 crc kubenswrapper[4735]: I0128 10:25:24.276551 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4t2t" event={"ID":"7cb33f89-d2a2-4947-aeda-65c1518beb1a","Type":"ContainerDied","Data":"bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b"} Jan 28 10:25:24 crc kubenswrapper[4735]: I0128 10:25:24.277894 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4t2t" event={"ID":"7cb33f89-d2a2-4947-aeda-65c1518beb1a","Type":"ContainerStarted","Data":"da636f229dd7ea71849ec4a87bcc53cd2f7b19a7a498904b0e63b98109f37d27"} Jan 28 10:25:25 crc kubenswrapper[4735]: I0128 10:25:25.287625 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4t2t" event={"ID":"7cb33f89-d2a2-4947-aeda-65c1518beb1a","Type":"ContainerStarted","Data":"672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31"} Jan 28 10:25:26 crc kubenswrapper[4735]: I0128 10:25:26.299087 4735 generic.go:334] "Generic (PLEG): container finished" podID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerID="672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31" exitCode=0 Jan 28 10:25:26 crc kubenswrapper[4735]: I0128 10:25:26.299182 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4t2t" 
event={"ID":"7cb33f89-d2a2-4947-aeda-65c1518beb1a","Type":"ContainerDied","Data":"672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31"} Jan 28 10:25:27 crc kubenswrapper[4735]: I0128 10:25:27.309910 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4t2t" event={"ID":"7cb33f89-d2a2-4947-aeda-65c1518beb1a","Type":"ContainerStarted","Data":"5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30"} Jan 28 10:25:27 crc kubenswrapper[4735]: I0128 10:25:27.341684 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n4t2t" podStartSLOduration=2.911482543 podStartE2EDuration="5.341659805s" podCreationTimestamp="2026-01-28 10:25:22 +0000 UTC" firstStartedPulling="2026-01-28 10:25:24.279565586 +0000 UTC m=+2577.429229959" lastFinishedPulling="2026-01-28 10:25:26.709742848 +0000 UTC m=+2579.859407221" observedRunningTime="2026-01-28 10:25:27.331891406 +0000 UTC m=+2580.481555809" watchObservedRunningTime="2026-01-28 10:25:27.341659805 +0000 UTC m=+2580.491324188" Jan 28 10:25:33 crc kubenswrapper[4735]: I0128 10:25:33.328638 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:33 crc kubenswrapper[4735]: I0128 10:25:33.329266 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:33 crc kubenswrapper[4735]: I0128 10:25:33.379387 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:33 crc kubenswrapper[4735]: I0128 10:25:33.426225 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:33 crc kubenswrapper[4735]: I0128 10:25:33.615504 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-n4t2t"] Jan 28 10:25:35 crc kubenswrapper[4735]: I0128 10:25:35.383399 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n4t2t" podUID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerName="registry-server" containerID="cri-o://5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30" gracePeriod=2 Jan 28 10:25:35 crc kubenswrapper[4735]: I0128 10:25:35.820432 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.003469 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkklt\" (UniqueName: \"kubernetes.io/projected/7cb33f89-d2a2-4947-aeda-65c1518beb1a-kube-api-access-mkklt\") pod \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.003632 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-catalog-content\") pod \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.003666 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-utilities\") pod \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\" (UID: \"7cb33f89-d2a2-4947-aeda-65c1518beb1a\") " Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.004564 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-utilities" (OuterVolumeSpecName: "utilities") pod "7cb33f89-d2a2-4947-aeda-65c1518beb1a" (UID: 
"7cb33f89-d2a2-4947-aeda-65c1518beb1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.010659 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb33f89-d2a2-4947-aeda-65c1518beb1a-kube-api-access-mkklt" (OuterVolumeSpecName: "kube-api-access-mkklt") pod "7cb33f89-d2a2-4947-aeda-65c1518beb1a" (UID: "7cb33f89-d2a2-4947-aeda-65c1518beb1a"). InnerVolumeSpecName "kube-api-access-mkklt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.034825 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2dg7d"] Jan 28 10:25:36 crc kubenswrapper[4735]: E0128 10:25:36.035281 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerName="extract-utilities" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.035306 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerName="extract-utilities" Jan 28 10:25:36 crc kubenswrapper[4735]: E0128 10:25:36.035327 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerName="registry-server" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.035335 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerName="registry-server" Jan 28 10:25:36 crc kubenswrapper[4735]: E0128 10:25:36.035361 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerName="extract-content" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.035369 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerName="extract-content" Jan 28 10:25:36 crc kubenswrapper[4735]: 
I0128 10:25:36.035574 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerName="registry-server" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.037397 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.052145 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dg7d"] Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.066082 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cb33f89-d2a2-4947-aeda-65c1518beb1a" (UID: "7cb33f89-d2a2-4947-aeda-65c1518beb1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.106973 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkklt\" (UniqueName: \"kubernetes.io/projected/7cb33f89-d2a2-4947-aeda-65c1518beb1a-kube-api-access-mkklt\") on node \"crc\" DevicePath \"\"" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.107524 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.107536 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cb33f89-d2a2-4947-aeda-65c1518beb1a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.210219 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-utilities\") pod \"certified-operators-2dg7d\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.210355 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-catalog-content\") pod \"certified-operators-2dg7d\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.210416 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbrsr\" (UniqueName: \"kubernetes.io/projected/704286d7-789c-45b7-948b-249d4fbb05ef-kube-api-access-lbrsr\") pod \"certified-operators-2dg7d\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.312028 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-catalog-content\") pod \"certified-operators-2dg7d\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.312144 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbrsr\" (UniqueName: \"kubernetes.io/projected/704286d7-789c-45b7-948b-249d4fbb05ef-kube-api-access-lbrsr\") pod \"certified-operators-2dg7d\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.312238 4735 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-utilities\") pod \"certified-operators-2dg7d\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.312690 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-catalog-content\") pod \"certified-operators-2dg7d\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.312800 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-utilities\") pod \"certified-operators-2dg7d\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.339588 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbrsr\" (UniqueName: \"kubernetes.io/projected/704286d7-789c-45b7-948b-249d4fbb05ef-kube-api-access-lbrsr\") pod \"certified-operators-2dg7d\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.363844 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.395834 4735 generic.go:334] "Generic (PLEG): container finished" podID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" containerID="5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30" exitCode=0 Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.395874 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4t2t" event={"ID":"7cb33f89-d2a2-4947-aeda-65c1518beb1a","Type":"ContainerDied","Data":"5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30"} Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.395898 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4t2t" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.395919 4735 scope.go:117] "RemoveContainer" containerID="5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.395905 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4t2t" event={"ID":"7cb33f89-d2a2-4947-aeda-65c1518beb1a","Type":"ContainerDied","Data":"da636f229dd7ea71849ec4a87bcc53cd2f7b19a7a498904b0e63b98109f37d27"} Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.440199 4735 scope.go:117] "RemoveContainer" containerID="672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.448799 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n4t2t"] Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.459644 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n4t2t"] Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.473444 4735 scope.go:117] "RemoveContainer" 
containerID="bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.554916 4735 scope.go:117] "RemoveContainer" containerID="5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30" Jan 28 10:25:36 crc kubenswrapper[4735]: E0128 10:25:36.558291 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30\": container with ID starting with 5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30 not found: ID does not exist" containerID="5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.558352 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30"} err="failed to get container status \"5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30\": rpc error: code = NotFound desc = could not find container \"5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30\": container with ID starting with 5ea869e257b81a9a0c4f768adeb6c54b95622d95f8d7bc4721c2593e58552b30 not found: ID does not exist" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.558375 4735 scope.go:117] "RemoveContainer" containerID="672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31" Jan 28 10:25:36 crc kubenswrapper[4735]: E0128 10:25:36.561995 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31\": container with ID starting with 672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31 not found: ID does not exist" containerID="672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31" Jan 28 10:25:36 crc 
kubenswrapper[4735]: I0128 10:25:36.562027 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31"} err="failed to get container status \"672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31\": rpc error: code = NotFound desc = could not find container \"672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31\": container with ID starting with 672924822c9d1a3dc0914f1b041fb1951527700c7b0b3997dacf8efbb061cb31 not found: ID does not exist" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.562042 4735 scope.go:117] "RemoveContainer" containerID="bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b" Jan 28 10:25:36 crc kubenswrapper[4735]: E0128 10:25:36.562226 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b\": container with ID starting with bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b not found: ID does not exist" containerID="bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.562249 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b"} err="failed to get container status \"bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b\": rpc error: code = NotFound desc = could not find container \"bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b\": container with ID starting with bc1a1e9bb9a96043cc5409414b53bfdcbfeaa9c8576888631707073254a5954b not found: ID does not exist" Jan 28 10:25:36 crc kubenswrapper[4735]: I0128 10:25:36.902885 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dg7d"] Jan 28 10:25:36 
crc kubenswrapper[4735]: W0128 10:25:36.921062 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod704286d7_789c_45b7_948b_249d4fbb05ef.slice/crio-cbd525e4f0e06b78a6e21e16c65bacfc8edec5eb8300fd5fe5c8efc083f6e3b5 WatchSource:0}: Error finding container cbd525e4f0e06b78a6e21e16c65bacfc8edec5eb8300fd5fe5c8efc083f6e3b5: Status 404 returned error can't find the container with id cbd525e4f0e06b78a6e21e16c65bacfc8edec5eb8300fd5fe5c8efc083f6e3b5 Jan 28 10:25:37 crc kubenswrapper[4735]: I0128 10:25:37.405471 4735 generic.go:334] "Generic (PLEG): container finished" podID="704286d7-789c-45b7-948b-249d4fbb05ef" containerID="daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193" exitCode=0 Jan 28 10:25:37 crc kubenswrapper[4735]: I0128 10:25:37.405531 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dg7d" event={"ID":"704286d7-789c-45b7-948b-249d4fbb05ef","Type":"ContainerDied","Data":"daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193"} Jan 28 10:25:37 crc kubenswrapper[4735]: I0128 10:25:37.405555 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dg7d" event={"ID":"704286d7-789c-45b7-948b-249d4fbb05ef","Type":"ContainerStarted","Data":"cbd525e4f0e06b78a6e21e16c65bacfc8edec5eb8300fd5fe5c8efc083f6e3b5"} Jan 28 10:25:37 crc kubenswrapper[4735]: I0128 10:25:37.521054 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb33f89-d2a2-4947-aeda-65c1518beb1a" path="/var/lib/kubelet/pods/7cb33f89-d2a2-4947-aeda-65c1518beb1a/volumes" Jan 28 10:25:38 crc kubenswrapper[4735]: I0128 10:25:38.421058 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dg7d" event={"ID":"704286d7-789c-45b7-948b-249d4fbb05ef","Type":"ContainerStarted","Data":"950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0"} 
Jan 28 10:25:39 crc kubenswrapper[4735]: I0128 10:25:39.529202 4735 generic.go:334] "Generic (PLEG): container finished" podID="704286d7-789c-45b7-948b-249d4fbb05ef" containerID="950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0" exitCode=0 Jan 28 10:25:39 crc kubenswrapper[4735]: I0128 10:25:39.529320 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dg7d" event={"ID":"704286d7-789c-45b7-948b-249d4fbb05ef","Type":"ContainerDied","Data":"950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0"} Jan 28 10:25:41 crc kubenswrapper[4735]: I0128 10:25:41.551240 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dg7d" event={"ID":"704286d7-789c-45b7-948b-249d4fbb05ef","Type":"ContainerStarted","Data":"92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5"} Jan 28 10:25:41 crc kubenswrapper[4735]: I0128 10:25:41.580102 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2dg7d" podStartSLOduration=2.736780009 podStartE2EDuration="5.580065225s" podCreationTimestamp="2026-01-28 10:25:36 +0000 UTC" firstStartedPulling="2026-01-28 10:25:37.407561631 +0000 UTC m=+2590.557226004" lastFinishedPulling="2026-01-28 10:25:40.250846857 +0000 UTC m=+2593.400511220" observedRunningTime="2026-01-28 10:25:41.573709 +0000 UTC m=+2594.723373373" watchObservedRunningTime="2026-01-28 10:25:41.580065225 +0000 UTC m=+2594.729729598" Jan 28 10:25:46 crc kubenswrapper[4735]: I0128 10:25:46.364692 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:46 crc kubenswrapper[4735]: I0128 10:25:46.365291 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:46 crc kubenswrapper[4735]: I0128 10:25:46.458484 4735 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:46 crc kubenswrapper[4735]: I0128 10:25:46.645044 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:46 crc kubenswrapper[4735]: I0128 10:25:46.713602 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dg7d"] Jan 28 10:25:48 crc kubenswrapper[4735]: I0128 10:25:48.624108 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2dg7d" podUID="704286d7-789c-45b7-948b-249d4fbb05ef" containerName="registry-server" containerID="cri-o://92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5" gracePeriod=2 Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.119293 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.216690 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-catalog-content\") pod \"704286d7-789c-45b7-948b-249d4fbb05ef\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.217311 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-utilities\") pod \"704286d7-789c-45b7-948b-249d4fbb05ef\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.217412 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbrsr\" (UniqueName: 
\"kubernetes.io/projected/704286d7-789c-45b7-948b-249d4fbb05ef-kube-api-access-lbrsr\") pod \"704286d7-789c-45b7-948b-249d4fbb05ef\" (UID: \"704286d7-789c-45b7-948b-249d4fbb05ef\") " Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.218215 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-utilities" (OuterVolumeSpecName: "utilities") pod "704286d7-789c-45b7-948b-249d4fbb05ef" (UID: "704286d7-789c-45b7-948b-249d4fbb05ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.224556 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/704286d7-789c-45b7-948b-249d4fbb05ef-kube-api-access-lbrsr" (OuterVolumeSpecName: "kube-api-access-lbrsr") pod "704286d7-789c-45b7-948b-249d4fbb05ef" (UID: "704286d7-789c-45b7-948b-249d4fbb05ef"). InnerVolumeSpecName "kube-api-access-lbrsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.260702 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "704286d7-789c-45b7-948b-249d4fbb05ef" (UID: "704286d7-789c-45b7-948b-249d4fbb05ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.320149 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.320202 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbrsr\" (UniqueName: \"kubernetes.io/projected/704286d7-789c-45b7-948b-249d4fbb05ef-kube-api-access-lbrsr\") on node \"crc\" DevicePath \"\"" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.320410 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/704286d7-789c-45b7-948b-249d4fbb05ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.636524 4735 generic.go:334] "Generic (PLEG): container finished" podID="704286d7-789c-45b7-948b-249d4fbb05ef" containerID="92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5" exitCode=0 Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.636575 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dg7d" event={"ID":"704286d7-789c-45b7-948b-249d4fbb05ef","Type":"ContainerDied","Data":"92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5"} Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.636587 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dg7d" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.636614 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dg7d" event={"ID":"704286d7-789c-45b7-948b-249d4fbb05ef","Type":"ContainerDied","Data":"cbd525e4f0e06b78a6e21e16c65bacfc8edec5eb8300fd5fe5c8efc083f6e3b5"} Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.636637 4735 scope.go:117] "RemoveContainer" containerID="92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.668805 4735 scope.go:117] "RemoveContainer" containerID="950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.670853 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dg7d"] Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.683974 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2dg7d"] Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.691000 4735 scope.go:117] "RemoveContainer" containerID="daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.741363 4735 scope.go:117] "RemoveContainer" containerID="92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5" Jan 28 10:25:49 crc kubenswrapper[4735]: E0128 10:25:49.742203 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5\": container with ID starting with 92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5 not found: ID does not exist" containerID="92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.742318 4735 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5"} err="failed to get container status \"92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5\": rpc error: code = NotFound desc = could not find container \"92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5\": container with ID starting with 92b1d35259f4deb463878702745160625bf0276e3595f67dd74596621fd9a2c5 not found: ID does not exist" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.742379 4735 scope.go:117] "RemoveContainer" containerID="950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0" Jan 28 10:25:49 crc kubenswrapper[4735]: E0128 10:25:49.743435 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0\": container with ID starting with 950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0 not found: ID does not exist" containerID="950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.743497 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0"} err="failed to get container status \"950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0\": rpc error: code = NotFound desc = could not find container \"950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0\": container with ID starting with 950f5df90e865cd9532eb46846ca90ee74b6dff6fbabd35e1f3dde2503369be0 not found: ID does not exist" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.743568 4735 scope.go:117] "RemoveContainer" containerID="daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193" Jan 28 10:25:49 crc kubenswrapper[4735]: E0128 
10:25:49.744797 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193\": container with ID starting with daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193 not found: ID does not exist" containerID="daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193" Jan 28 10:25:49 crc kubenswrapper[4735]: I0128 10:25:49.744841 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193"} err="failed to get container status \"daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193\": rpc error: code = NotFound desc = could not find container \"daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193\": container with ID starting with daea7c7febbe87c4e2bbc697143ea9ca976e3e27fde5f379c41c40d277b70193 not found: ID does not exist" Jan 28 10:25:51 crc kubenswrapper[4735]: I0128 10:25:51.529258 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="704286d7-789c-45b7-948b-249d4fbb05ef" path="/var/lib/kubelet/pods/704286d7-789c-45b7-948b-249d4fbb05ef/volumes" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.012356 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9wlm7"] Jan 28 10:26:10 crc kubenswrapper[4735]: E0128 10:26:10.013686 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704286d7-789c-45b7-948b-249d4fbb05ef" containerName="extract-content" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.013704 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="704286d7-789c-45b7-948b-249d4fbb05ef" containerName="extract-content" Jan 28 10:26:10 crc kubenswrapper[4735]: E0128 10:26:10.013716 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704286d7-789c-45b7-948b-249d4fbb05ef" 
containerName="registry-server" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.013722 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="704286d7-789c-45b7-948b-249d4fbb05ef" containerName="registry-server" Jan 28 10:26:10 crc kubenswrapper[4735]: E0128 10:26:10.013736 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704286d7-789c-45b7-948b-249d4fbb05ef" containerName="extract-utilities" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.013763 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="704286d7-789c-45b7-948b-249d4fbb05ef" containerName="extract-utilities" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.013936 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="704286d7-789c-45b7-948b-249d4fbb05ef" containerName="registry-server" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.015341 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.051979 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wlm7"] Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.131874 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnszv\" (UniqueName: \"kubernetes.io/projected/b41e4a67-46aa-41a0-b422-f2c61c4742bd-kube-api-access-pnszv\") pod \"redhat-operators-9wlm7\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.131952 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-utilities\") pod \"redhat-operators-9wlm7\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " 
pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.132193 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-catalog-content\") pod \"redhat-operators-9wlm7\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.235134 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-utilities\") pod \"redhat-operators-9wlm7\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.235260 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-catalog-content\") pod \"redhat-operators-9wlm7\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.235449 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnszv\" (UniqueName: \"kubernetes.io/projected/b41e4a67-46aa-41a0-b422-f2c61c4742bd-kube-api-access-pnszv\") pod \"redhat-operators-9wlm7\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.235896 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-utilities\") pod \"redhat-operators-9wlm7\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " 
pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.235907 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-catalog-content\") pod \"redhat-operators-9wlm7\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.264490 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnszv\" (UniqueName: \"kubernetes.io/projected/b41e4a67-46aa-41a0-b422-f2c61c4742bd-kube-api-access-pnszv\") pod \"redhat-operators-9wlm7\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.345000 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.827513 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wlm7"] Jan 28 10:26:10 crc kubenswrapper[4735]: I0128 10:26:10.844960 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wlm7" event={"ID":"b41e4a67-46aa-41a0-b422-f2c61c4742bd","Type":"ContainerStarted","Data":"771ff6638fc1e89ccf764afd1f3312682aeb1fc8fa2f0cc895e5be4b38d133bd"} Jan 28 10:26:11 crc kubenswrapper[4735]: I0128 10:26:11.861442 4735 generic.go:334] "Generic (PLEG): container finished" podID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerID="75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9" exitCode=0 Jan 28 10:26:11 crc kubenswrapper[4735]: I0128 10:26:11.861570 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wlm7" 
event={"ID":"b41e4a67-46aa-41a0-b422-f2c61c4742bd","Type":"ContainerDied","Data":"75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9"} Jan 28 10:26:13 crc kubenswrapper[4735]: I0128 10:26:13.879956 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wlm7" event={"ID":"b41e4a67-46aa-41a0-b422-f2c61c4742bd","Type":"ContainerStarted","Data":"ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a"} Jan 28 10:26:15 crc kubenswrapper[4735]: I0128 10:26:15.898476 4735 generic.go:334] "Generic (PLEG): container finished" podID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerID="ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a" exitCode=0 Jan 28 10:26:15 crc kubenswrapper[4735]: I0128 10:26:15.898556 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wlm7" event={"ID":"b41e4a67-46aa-41a0-b422-f2c61c4742bd","Type":"ContainerDied","Data":"ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a"} Jan 28 10:26:16 crc kubenswrapper[4735]: I0128 10:26:16.913588 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wlm7" event={"ID":"b41e4a67-46aa-41a0-b422-f2c61c4742bd","Type":"ContainerStarted","Data":"d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7"} Jan 28 10:26:16 crc kubenswrapper[4735]: I0128 10:26:16.940394 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9wlm7" podStartSLOduration=3.223025525 podStartE2EDuration="7.940377813s" podCreationTimestamp="2026-01-28 10:26:09 +0000 UTC" firstStartedPulling="2026-01-28 10:26:11.867995743 +0000 UTC m=+2625.017660126" lastFinishedPulling="2026-01-28 10:26:16.585348041 +0000 UTC m=+2629.735012414" observedRunningTime="2026-01-28 10:26:16.934105468 +0000 UTC m=+2630.083769891" watchObservedRunningTime="2026-01-28 10:26:16.940377813 +0000 UTC m=+2630.090042186" 
Jan 28 10:26:20 crc kubenswrapper[4735]: I0128 10:26:20.346954 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:20 crc kubenswrapper[4735]: I0128 10:26:20.347376 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:21 crc kubenswrapper[4735]: I0128 10:26:21.393426 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9wlm7" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerName="registry-server" probeResult="failure" output=< Jan 28 10:26:21 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 28 10:26:21 crc kubenswrapper[4735]: > Jan 28 10:26:30 crc kubenswrapper[4735]: I0128 10:26:30.399835 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:30 crc kubenswrapper[4735]: I0128 10:26:30.449332 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:30 crc kubenswrapper[4735]: I0128 10:26:30.632950 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wlm7"] Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.052914 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9wlm7" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerName="registry-server" containerID="cri-o://d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7" gracePeriod=2 Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.484365 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.520646 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-catalog-content\") pod \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.520930 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-utilities\") pod \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.521127 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnszv\" (UniqueName: \"kubernetes.io/projected/b41e4a67-46aa-41a0-b422-f2c61c4742bd-kube-api-access-pnszv\") pod \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\" (UID: \"b41e4a67-46aa-41a0-b422-f2c61c4742bd\") " Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.523491 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-utilities" (OuterVolumeSpecName: "utilities") pod "b41e4a67-46aa-41a0-b422-f2c61c4742bd" (UID: "b41e4a67-46aa-41a0-b422-f2c61c4742bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.533706 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41e4a67-46aa-41a0-b422-f2c61c4742bd-kube-api-access-pnszv" (OuterVolumeSpecName: "kube-api-access-pnszv") pod "b41e4a67-46aa-41a0-b422-f2c61c4742bd" (UID: "b41e4a67-46aa-41a0-b422-f2c61c4742bd"). InnerVolumeSpecName "kube-api-access-pnszv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.627127 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.627160 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnszv\" (UniqueName: \"kubernetes.io/projected/b41e4a67-46aa-41a0-b422-f2c61c4742bd-kube-api-access-pnszv\") on node \"crc\" DevicePath \"\"" Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.680404 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b41e4a67-46aa-41a0-b422-f2c61c4742bd" (UID: "b41e4a67-46aa-41a0-b422-f2c61c4742bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:26:32 crc kubenswrapper[4735]: I0128 10:26:32.728941 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41e4a67-46aa-41a0-b422-f2c61c4742bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.067885 4735 generic.go:334] "Generic (PLEG): container finished" podID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerID="d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7" exitCode=0 Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.068169 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wlm7" event={"ID":"b41e4a67-46aa-41a0-b422-f2c61c4742bd","Type":"ContainerDied","Data":"d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7"} Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.068216 4735 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-9wlm7" event={"ID":"b41e4a67-46aa-41a0-b422-f2c61c4742bd","Type":"ContainerDied","Data":"771ff6638fc1e89ccf764afd1f3312682aeb1fc8fa2f0cc895e5be4b38d133bd"} Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.068243 4735 scope.go:117] "RemoveContainer" containerID="d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.068399 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wlm7" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.106666 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wlm7"] Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.109376 4735 scope.go:117] "RemoveContainer" containerID="ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.125496 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9wlm7"] Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.137560 4735 scope.go:117] "RemoveContainer" containerID="75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.179165 4735 scope.go:117] "RemoveContainer" containerID="d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7" Jan 28 10:26:33 crc kubenswrapper[4735]: E0128 10:26:33.179686 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7\": container with ID starting with d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7 not found: ID does not exist" containerID="d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.179740 4735 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7"} err="failed to get container status \"d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7\": rpc error: code = NotFound desc = could not find container \"d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7\": container with ID starting with d5537c316281d8c660c74c8a7f8f694e4e418fac694f4dd5d4723cfb5d9433d7 not found: ID does not exist" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.179816 4735 scope.go:117] "RemoveContainer" containerID="ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a" Jan 28 10:26:33 crc kubenswrapper[4735]: E0128 10:26:33.180140 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a\": container with ID starting with ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a not found: ID does not exist" containerID="ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.180182 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a"} err="failed to get container status \"ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a\": rpc error: code = NotFound desc = could not find container \"ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a\": container with ID starting with ec49d53dd90eddeee4539fded1a22e54d9670f48f1fe16caf5cc2a696379468a not found: ID does not exist" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.180208 4735 scope.go:117] "RemoveContainer" containerID="75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9" Jan 28 10:26:33 crc kubenswrapper[4735]: E0128 
10:26:33.180470 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9\": container with ID starting with 75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9 not found: ID does not exist" containerID="75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.180511 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9"} err="failed to get container status \"75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9\": rpc error: code = NotFound desc = could not find container \"75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9\": container with ID starting with 75d1deea69855129091e9d54bf58451648c844a7e7bff1b6c7671f153851fbd9 not found: ID does not exist" Jan 28 10:26:33 crc kubenswrapper[4735]: I0128 10:26:33.517362 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" path="/var/lib/kubelet/pods/b41e4a67-46aa-41a0-b422-f2c61c4742bd/volumes" Jan 28 10:27:05 crc kubenswrapper[4735]: I0128 10:27:05.430359 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:27:05 crc kubenswrapper[4735]: I0128 10:27:05.430820 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 28 10:27:08 crc kubenswrapper[4735]: I0128 10:27:08.431007 4735 generic.go:334] "Generic (PLEG): container finished" podID="05947d1f-31c3-4046-b5e5-c70249faa781" containerID="1ca703f9a62a9c5033a4462276dd5e2220290c93d20b879a53532b763c035c9e" exitCode=0 Jan 28 10:27:08 crc kubenswrapper[4735]: I0128 10:27:08.431117 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" event={"ID":"05947d1f-31c3-4046-b5e5-c70249faa781","Type":"ContainerDied","Data":"1ca703f9a62a9c5033a4462276dd5e2220290c93d20b879a53532b763c035c9e"} Jan 28 10:27:09 crc kubenswrapper[4735]: I0128 10:27:09.938927 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.032712 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-telemetry-combined-ca-bundle\") pod \"05947d1f-31c3-4046-b5e5-c70249faa781\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.032815 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc52g\" (UniqueName: \"kubernetes.io/projected/05947d1f-31c3-4046-b5e5-c70249faa781-kube-api-access-tc52g\") pod \"05947d1f-31c3-4046-b5e5-c70249faa781\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.032850 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-0\") pod \"05947d1f-31c3-4046-b5e5-c70249faa781\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 
10:27:10.032898 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-inventory\") pod \"05947d1f-31c3-4046-b5e5-c70249faa781\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.033071 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ssh-key-openstack-edpm-ipam\") pod \"05947d1f-31c3-4046-b5e5-c70249faa781\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.033100 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-1\") pod \"05947d1f-31c3-4046-b5e5-c70249faa781\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.033172 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-2\") pod \"05947d1f-31c3-4046-b5e5-c70249faa781\" (UID: \"05947d1f-31c3-4046-b5e5-c70249faa781\") " Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.038132 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "05947d1f-31c3-4046-b5e5-c70249faa781" (UID: "05947d1f-31c3-4046-b5e5-c70249faa781"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.038606 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05947d1f-31c3-4046-b5e5-c70249faa781-kube-api-access-tc52g" (OuterVolumeSpecName: "kube-api-access-tc52g") pod "05947d1f-31c3-4046-b5e5-c70249faa781" (UID: "05947d1f-31c3-4046-b5e5-c70249faa781"). InnerVolumeSpecName "kube-api-access-tc52g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.059783 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "05947d1f-31c3-4046-b5e5-c70249faa781" (UID: "05947d1f-31c3-4046-b5e5-c70249faa781"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.059796 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-inventory" (OuterVolumeSpecName: "inventory") pod "05947d1f-31c3-4046-b5e5-c70249faa781" (UID: "05947d1f-31c3-4046-b5e5-c70249faa781"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.060230 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "05947d1f-31c3-4046-b5e5-c70249faa781" (UID: "05947d1f-31c3-4046-b5e5-c70249faa781"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.062994 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "05947d1f-31c3-4046-b5e5-c70249faa781" (UID: "05947d1f-31c3-4046-b5e5-c70249faa781"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.067864 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "05947d1f-31c3-4046-b5e5-c70249faa781" (UID: "05947d1f-31c3-4046-b5e5-c70249faa781"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.135778 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.135824 4735 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.135838 4735 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.135851 4735 
reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.135864 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc52g\" (UniqueName: \"kubernetes.io/projected/05947d1f-31c3-4046-b5e5-c70249faa781-kube-api-access-tc52g\") on node \"crc\" DevicePath \"\"" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.135875 4735 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.135889 4735 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05947d1f-31c3-4046-b5e5-c70249faa781-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.455985 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" event={"ID":"05947d1f-31c3-4046-b5e5-c70249faa781","Type":"ContainerDied","Data":"af4848f91a5848323f55fe004c8b1cd86a799e8a3dfae25dd32fc7c4163a0d49"} Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.456023 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af4848f91a5848323f55fe004c8b1cd86a799e8a3dfae25dd32fc7c4163a0d49" Jan 28 10:27:10 crc kubenswrapper[4735]: I0128 10:27:10.456053 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f24zk" Jan 28 10:27:35 crc kubenswrapper[4735]: I0128 10:27:35.429650 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:27:35 crc kubenswrapper[4735]: I0128 10:27:35.430273 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:28:05 crc kubenswrapper[4735]: I0128 10:28:05.429995 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:28:05 crc kubenswrapper[4735]: I0128 10:28:05.430643 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:28:05 crc kubenswrapper[4735]: I0128 10:28:05.430702 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:28:05 crc kubenswrapper[4735]: I0128 10:28:05.431603 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"dbfa7576c396dfe90ddc7f77124fc640faab1a0d28cc7b69eb0aff2b00d1d8d0"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:28:05 crc kubenswrapper[4735]: I0128 10:28:05.431681 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://dbfa7576c396dfe90ddc7f77124fc640faab1a0d28cc7b69eb0aff2b00d1d8d0" gracePeriod=600 Jan 28 10:28:05 crc kubenswrapper[4735]: I0128 10:28:05.969464 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="dbfa7576c396dfe90ddc7f77124fc640faab1a0d28cc7b69eb0aff2b00d1d8d0" exitCode=0 Jan 28 10:28:05 crc kubenswrapper[4735]: I0128 10:28:05.969572 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"dbfa7576c396dfe90ddc7f77124fc640faab1a0d28cc7b69eb0aff2b00d1d8d0"} Jan 28 10:28:05 crc kubenswrapper[4735]: I0128 10:28:05.970663 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd"} Jan 28 10:28:05 crc kubenswrapper[4735]: I0128 10:28:05.970767 4735 scope.go:117] "RemoveContainer" containerID="49fe306c8e448d44d7d023c6fec95ef1abac93d88f8f58e1a66da6ea456eddde" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.325626 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 10:28:13 crc kubenswrapper[4735]: E0128 10:28:13.326769 4735 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerName="registry-server" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.326787 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerName="registry-server" Jan 28 10:28:13 crc kubenswrapper[4735]: E0128 10:28:13.326815 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05947d1f-31c3-4046-b5e5-c70249faa781" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.326826 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="05947d1f-31c3-4046-b5e5-c70249faa781" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 10:28:13 crc kubenswrapper[4735]: E0128 10:28:13.326842 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerName="extract-content" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.326850 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerName="extract-content" Jan 28 10:28:13 crc kubenswrapper[4735]: E0128 10:28:13.326875 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerName="extract-utilities" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.326883 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerName="extract-utilities" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.327101 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41e4a67-46aa-41a0-b422-f2c61c4742bd" containerName="registry-server" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.327205 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="05947d1f-31c3-4046-b5e5-c70249faa781" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.328093 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.330290 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nmp45" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.330614 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.331139 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.332258 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.341442 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.440702 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.440779 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.440813 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.440847 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg256\" (UniqueName: \"kubernetes.io/projected/d5faba03-f0db-452a-bf0c-271d930e27b5-kube-api-access-bg256\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.440870 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.441049 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.441082 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-config-data\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.441123 4735 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.441201 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.543827 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.544053 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.544161 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.544260 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-bg256\" (UniqueName: \"kubernetes.io/projected/d5faba03-f0db-452a-bf0c-271d930e27b5-kube-api-access-bg256\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.544339 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.544433 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.544501 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-config-data\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.544583 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.544692 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.545048 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.548705 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.550038 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-config-data\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.550942 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.551289 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.551689 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.554611 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.565258 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.567313 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg256\" (UniqueName: \"kubernetes.io/projected/d5faba03-f0db-452a-bf0c-271d930e27b5-kube-api-access-bg256\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.583099 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " pod="openstack/tempest-tests-tempest" Jan 28 10:28:13 crc kubenswrapper[4735]: I0128 10:28:13.654947 4735 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 10:28:14 crc kubenswrapper[4735]: I0128 10:28:14.160402 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 10:28:14 crc kubenswrapper[4735]: W0128 10:28:14.169384 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5faba03_f0db_452a_bf0c_271d930e27b5.slice/crio-d1e5efc4f706606801760e6c19a1022bb45e279044b0761b0c6d7905d1e378f0 WatchSource:0}: Error finding container d1e5efc4f706606801760e6c19a1022bb45e279044b0761b0c6d7905d1e378f0: Status 404 returned error can't find the container with id d1e5efc4f706606801760e6c19a1022bb45e279044b0761b0c6d7905d1e378f0 Jan 28 10:28:14 crc kubenswrapper[4735]: I0128 10:28:14.171882 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 10:28:15 crc kubenswrapper[4735]: I0128 10:28:15.061882 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5faba03-f0db-452a-bf0c-271d930e27b5","Type":"ContainerStarted","Data":"d1e5efc4f706606801760e6c19a1022bb45e279044b0761b0c6d7905d1e378f0"} Jan 28 10:28:52 crc kubenswrapper[4735]: E0128 10:28:52.983777 4735 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 28 10:28:52 crc kubenswrapper[4735]: E0128 10:28:52.985494 4735 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bg256,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d5faba03-f0db-452a-bf0c-271d930e27b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 10:28:52 crc kubenswrapper[4735]: E0128 10:28:52.986814 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d5faba03-f0db-452a-bf0c-271d930e27b5" Jan 28 10:28:53 crc kubenswrapper[4735]: E0128 10:28:53.423047 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="d5faba03-f0db-452a-bf0c-271d930e27b5" Jan 28 10:29:08 crc 
kubenswrapper[4735]: I0128 10:29:08.912016 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 10:29:10 crc kubenswrapper[4735]: I0128 10:29:10.578091 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5faba03-f0db-452a-bf0c-271d930e27b5","Type":"ContainerStarted","Data":"8462a6dffa0441a2f47ecf732bfe7d3ffb2dd9799a5816e068dd6475a9945a14"} Jan 28 10:29:10 crc kubenswrapper[4735]: I0128 10:29:10.602559 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.865013883 podStartE2EDuration="58.602538399s" podCreationTimestamp="2026-01-28 10:28:12 +0000 UTC" firstStartedPulling="2026-01-28 10:28:14.171627303 +0000 UTC m=+2747.321291696" lastFinishedPulling="2026-01-28 10:29:08.909151829 +0000 UTC m=+2802.058816212" observedRunningTime="2026-01-28 10:29:10.595413875 +0000 UTC m=+2803.745078258" watchObservedRunningTime="2026-01-28 10:29:10.602538399 +0000 UTC m=+2803.752202782" Jan 28 10:29:26 crc kubenswrapper[4735]: I0128 10:29:26.989259 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hbzqr"] Jan 28 10:29:26 crc kubenswrapper[4735]: I0128 10:29:26.991671 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.028142 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbzqr"] Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.120563 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsmn\" (UniqueName: \"kubernetes.io/projected/0faeb221-7b37-4b11-a576-629119b93bd6-kube-api-access-vbsmn\") pod \"redhat-marketplace-hbzqr\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.120608 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-utilities\") pod \"redhat-marketplace-hbzqr\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.120681 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-catalog-content\") pod \"redhat-marketplace-hbzqr\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.222928 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbsmn\" (UniqueName: \"kubernetes.io/projected/0faeb221-7b37-4b11-a576-629119b93bd6-kube-api-access-vbsmn\") pod \"redhat-marketplace-hbzqr\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.223023 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-utilities\") pod \"redhat-marketplace-hbzqr\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.223084 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-catalog-content\") pod \"redhat-marketplace-hbzqr\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.223476 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-utilities\") pod \"redhat-marketplace-hbzqr\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.223591 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-catalog-content\") pod \"redhat-marketplace-hbzqr\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.248773 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbsmn\" (UniqueName: \"kubernetes.io/projected/0faeb221-7b37-4b11-a576-629119b93bd6-kube-api-access-vbsmn\") pod \"redhat-marketplace-hbzqr\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.313496 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:27 crc kubenswrapper[4735]: I0128 10:29:27.800546 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbzqr"] Jan 28 10:29:28 crc kubenswrapper[4735]: I0128 10:29:28.731604 4735 generic.go:334] "Generic (PLEG): container finished" podID="0faeb221-7b37-4b11-a576-629119b93bd6" containerID="08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7" exitCode=0 Jan 28 10:29:28 crc kubenswrapper[4735]: I0128 10:29:28.731681 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbzqr" event={"ID":"0faeb221-7b37-4b11-a576-629119b93bd6","Type":"ContainerDied","Data":"08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7"} Jan 28 10:29:28 crc kubenswrapper[4735]: I0128 10:29:28.732009 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbzqr" event={"ID":"0faeb221-7b37-4b11-a576-629119b93bd6","Type":"ContainerStarted","Data":"c4b156e01397158bb5350f9e8d663912b23072e0d060d030f6674fd9949b78a8"} Jan 28 10:29:29 crc kubenswrapper[4735]: I0128 10:29:29.741510 4735 generic.go:334] "Generic (PLEG): container finished" podID="0faeb221-7b37-4b11-a576-629119b93bd6" containerID="50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339" exitCode=0 Jan 28 10:29:29 crc kubenswrapper[4735]: I0128 10:29:29.741578 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbzqr" event={"ID":"0faeb221-7b37-4b11-a576-629119b93bd6","Type":"ContainerDied","Data":"50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339"} Jan 28 10:29:30 crc kubenswrapper[4735]: I0128 10:29:30.754160 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbzqr" 
event={"ID":"0faeb221-7b37-4b11-a576-629119b93bd6","Type":"ContainerStarted","Data":"d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78"} Jan 28 10:29:30 crc kubenswrapper[4735]: I0128 10:29:30.775897 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hbzqr" podStartSLOduration=3.361489068 podStartE2EDuration="4.775879969s" podCreationTimestamp="2026-01-28 10:29:26 +0000 UTC" firstStartedPulling="2026-01-28 10:29:28.73424821 +0000 UTC m=+2821.883912603" lastFinishedPulling="2026-01-28 10:29:30.148639131 +0000 UTC m=+2823.298303504" observedRunningTime="2026-01-28 10:29:30.769722858 +0000 UTC m=+2823.919387231" watchObservedRunningTime="2026-01-28 10:29:30.775879969 +0000 UTC m=+2823.925544342" Jan 28 10:29:37 crc kubenswrapper[4735]: I0128 10:29:37.314897 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:37 crc kubenswrapper[4735]: I0128 10:29:37.317066 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:38 crc kubenswrapper[4735]: I0128 10:29:38.365536 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hbzqr" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" containerName="registry-server" probeResult="failure" output=< Jan 28 10:29:38 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 28 10:29:38 crc kubenswrapper[4735]: > Jan 28 10:29:47 crc kubenswrapper[4735]: I0128 10:29:47.363573 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:47 crc kubenswrapper[4735]: I0128 10:29:47.410875 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:47 crc 
kubenswrapper[4735]: I0128 10:29:47.608064 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbzqr"] Jan 28 10:29:48 crc kubenswrapper[4735]: I0128 10:29:48.909126 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hbzqr" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" containerName="registry-server" containerID="cri-o://d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78" gracePeriod=2 Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.396100 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.563613 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-utilities\") pod \"0faeb221-7b37-4b11-a576-629119b93bd6\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.563822 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-catalog-content\") pod \"0faeb221-7b37-4b11-a576-629119b93bd6\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.564039 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbsmn\" (UniqueName: \"kubernetes.io/projected/0faeb221-7b37-4b11-a576-629119b93bd6-kube-api-access-vbsmn\") pod \"0faeb221-7b37-4b11-a576-629119b93bd6\" (UID: \"0faeb221-7b37-4b11-a576-629119b93bd6\") " Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.564593 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-utilities" (OuterVolumeSpecName: "utilities") pod "0faeb221-7b37-4b11-a576-629119b93bd6" (UID: "0faeb221-7b37-4b11-a576-629119b93bd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.564987 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.577420 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0faeb221-7b37-4b11-a576-629119b93bd6-kube-api-access-vbsmn" (OuterVolumeSpecName: "kube-api-access-vbsmn") pod "0faeb221-7b37-4b11-a576-629119b93bd6" (UID: "0faeb221-7b37-4b11-a576-629119b93bd6"). InnerVolumeSpecName "kube-api-access-vbsmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.588519 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0faeb221-7b37-4b11-a576-629119b93bd6" (UID: "0faeb221-7b37-4b11-a576-629119b93bd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.666777 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faeb221-7b37-4b11-a576-629119b93bd6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.666810 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbsmn\" (UniqueName: \"kubernetes.io/projected/0faeb221-7b37-4b11-a576-629119b93bd6-kube-api-access-vbsmn\") on node \"crc\" DevicePath \"\"" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.920482 4735 generic.go:334] "Generic (PLEG): container finished" podID="0faeb221-7b37-4b11-a576-629119b93bd6" containerID="d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78" exitCode=0 Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.920534 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbzqr" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.920552 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbzqr" event={"ID":"0faeb221-7b37-4b11-a576-629119b93bd6","Type":"ContainerDied","Data":"d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78"} Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.922116 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbzqr" event={"ID":"0faeb221-7b37-4b11-a576-629119b93bd6","Type":"ContainerDied","Data":"c4b156e01397158bb5350f9e8d663912b23072e0d060d030f6674fd9949b78a8"} Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.922141 4735 scope.go:117] "RemoveContainer" containerID="d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.942713 4735 scope.go:117] "RemoveContainer" 
containerID="50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.974325 4735 scope.go:117] "RemoveContainer" containerID="08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7" Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.979702 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbzqr"] Jan 28 10:29:49 crc kubenswrapper[4735]: I0128 10:29:49.987544 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbzqr"] Jan 28 10:29:50 crc kubenswrapper[4735]: I0128 10:29:50.014330 4735 scope.go:117] "RemoveContainer" containerID="d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78" Jan 28 10:29:50 crc kubenswrapper[4735]: E0128 10:29:50.014875 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78\": container with ID starting with d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78 not found: ID does not exist" containerID="d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78" Jan 28 10:29:50 crc kubenswrapper[4735]: I0128 10:29:50.014922 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78"} err="failed to get container status \"d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78\": rpc error: code = NotFound desc = could not find container \"d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78\": container with ID starting with d6ba75bbbbfff8ce7ebb4779fd092c95078f0df6bc08511cdde11561a36f6a78 not found: ID does not exist" Jan 28 10:29:50 crc kubenswrapper[4735]: I0128 10:29:50.014949 4735 scope.go:117] "RemoveContainer" 
containerID="50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339" Jan 28 10:29:50 crc kubenswrapper[4735]: E0128 10:29:50.015256 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339\": container with ID starting with 50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339 not found: ID does not exist" containerID="50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339" Jan 28 10:29:50 crc kubenswrapper[4735]: I0128 10:29:50.015291 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339"} err="failed to get container status \"50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339\": rpc error: code = NotFound desc = could not find container \"50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339\": container with ID starting with 50e4ecb967936903a02cd451b798c4455fb4af64719a40b6274fbe57f161e339 not found: ID does not exist" Jan 28 10:29:50 crc kubenswrapper[4735]: I0128 10:29:50.015328 4735 scope.go:117] "RemoveContainer" containerID="08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7" Jan 28 10:29:50 crc kubenswrapper[4735]: E0128 10:29:50.015596 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7\": container with ID starting with 08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7 not found: ID does not exist" containerID="08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7" Jan 28 10:29:50 crc kubenswrapper[4735]: I0128 10:29:50.015634 4735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7"} err="failed to get container status \"08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7\": rpc error: code = NotFound desc = could not find container \"08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7\": container with ID starting with 08b99530ddd5cc1ecfcf862138972bbef4e94bee21a4ee39826b3dac5a7776d7 not found: ID does not exist" Jan 28 10:29:51 crc kubenswrapper[4735]: I0128 10:29:51.512517 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" path="/var/lib/kubelet/pods/0faeb221-7b37-4b11-a576-629119b93bd6/volumes" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.149831 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz"] Jan 28 10:30:00 crc kubenswrapper[4735]: E0128 10:30:00.151869 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" containerName="registry-server" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.151968 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" containerName="registry-server" Jan 28 10:30:00 crc kubenswrapper[4735]: E0128 10:30:00.152080 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" containerName="extract-content" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.152159 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" containerName="extract-content" Jan 28 10:30:00 crc kubenswrapper[4735]: E0128 10:30:00.152237 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" containerName="extract-utilities" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.152291 4735 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" containerName="extract-utilities" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.152525 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="0faeb221-7b37-4b11-a576-629119b93bd6" containerName="registry-server" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.153220 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.155127 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.156112 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.170078 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz"] Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.261318 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f88f46f6-1318-4598-93ce-9602f77f3892-config-volume\") pod \"collect-profiles-29493270-rjrvz\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.261392 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z7dq\" (UniqueName: \"kubernetes.io/projected/f88f46f6-1318-4598-93ce-9602f77f3892-kube-api-access-4z7dq\") pod \"collect-profiles-29493270-rjrvz\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.261491 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f88f46f6-1318-4598-93ce-9602f77f3892-secret-volume\") pod \"collect-profiles-29493270-rjrvz\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.362845 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f88f46f6-1318-4598-93ce-9602f77f3892-secret-volume\") pod \"collect-profiles-29493270-rjrvz\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.362986 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f88f46f6-1318-4598-93ce-9602f77f3892-config-volume\") pod \"collect-profiles-29493270-rjrvz\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.363029 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z7dq\" (UniqueName: \"kubernetes.io/projected/f88f46f6-1318-4598-93ce-9602f77f3892-kube-api-access-4z7dq\") pod \"collect-profiles-29493270-rjrvz\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.364053 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f88f46f6-1318-4598-93ce-9602f77f3892-config-volume\") pod \"collect-profiles-29493270-rjrvz\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.376694 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f88f46f6-1318-4598-93ce-9602f77f3892-secret-volume\") pod \"collect-profiles-29493270-rjrvz\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.384804 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z7dq\" (UniqueName: \"kubernetes.io/projected/f88f46f6-1318-4598-93ce-9602f77f3892-kube-api-access-4z7dq\") pod \"collect-profiles-29493270-rjrvz\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.481795 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:00 crc kubenswrapper[4735]: I0128 10:30:00.946712 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz"] Jan 28 10:30:01 crc kubenswrapper[4735]: I0128 10:30:01.029910 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" event={"ID":"f88f46f6-1318-4598-93ce-9602f77f3892","Type":"ContainerStarted","Data":"52aaa439534847eac9b70cdd271b22d77f0fb80e0c4501b9994f065a5bfa53da"} Jan 28 10:30:02 crc kubenswrapper[4735]: I0128 10:30:02.040922 4735 generic.go:334] "Generic (PLEG): container finished" podID="f88f46f6-1318-4598-93ce-9602f77f3892" containerID="cfc5e138279c2e90d4a2c29d4f2d08bb2c04d744b9a7684e2ca2bbf863cdde80" exitCode=0 Jan 28 10:30:02 crc kubenswrapper[4735]: I0128 10:30:02.041133 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" event={"ID":"f88f46f6-1318-4598-93ce-9602f77f3892","Type":"ContainerDied","Data":"cfc5e138279c2e90d4a2c29d4f2d08bb2c04d744b9a7684e2ca2bbf863cdde80"} Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.496295 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.624505 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f88f46f6-1318-4598-93ce-9602f77f3892-secret-volume\") pod \"f88f46f6-1318-4598-93ce-9602f77f3892\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.624639 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f88f46f6-1318-4598-93ce-9602f77f3892-config-volume\") pod \"f88f46f6-1318-4598-93ce-9602f77f3892\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.624723 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z7dq\" (UniqueName: \"kubernetes.io/projected/f88f46f6-1318-4598-93ce-9602f77f3892-kube-api-access-4z7dq\") pod \"f88f46f6-1318-4598-93ce-9602f77f3892\" (UID: \"f88f46f6-1318-4598-93ce-9602f77f3892\") " Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.627535 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f88f46f6-1318-4598-93ce-9602f77f3892-config-volume" (OuterVolumeSpecName: "config-volume") pod "f88f46f6-1318-4598-93ce-9602f77f3892" (UID: "f88f46f6-1318-4598-93ce-9602f77f3892"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.632989 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88f46f6-1318-4598-93ce-9602f77f3892-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f88f46f6-1318-4598-93ce-9602f77f3892" (UID: "f88f46f6-1318-4598-93ce-9602f77f3892"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.633029 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88f46f6-1318-4598-93ce-9602f77f3892-kube-api-access-4z7dq" (OuterVolumeSpecName: "kube-api-access-4z7dq") pod "f88f46f6-1318-4598-93ce-9602f77f3892" (UID: "f88f46f6-1318-4598-93ce-9602f77f3892"). InnerVolumeSpecName "kube-api-access-4z7dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.726387 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f88f46f6-1318-4598-93ce-9602f77f3892-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.726422 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z7dq\" (UniqueName: \"kubernetes.io/projected/f88f46f6-1318-4598-93ce-9602f77f3892-kube-api-access-4z7dq\") on node \"crc\" DevicePath \"\"" Jan 28 10:30:03 crc kubenswrapper[4735]: I0128 10:30:03.726433 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f88f46f6-1318-4598-93ce-9602f77f3892-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 10:30:04 crc kubenswrapper[4735]: I0128 10:30:04.080377 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" event={"ID":"f88f46f6-1318-4598-93ce-9602f77f3892","Type":"ContainerDied","Data":"52aaa439534847eac9b70cdd271b22d77f0fb80e0c4501b9994f065a5bfa53da"} Jan 28 10:30:04 crc kubenswrapper[4735]: I0128 10:30:04.080801 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52aaa439534847eac9b70cdd271b22d77f0fb80e0c4501b9994f065a5bfa53da" Jan 28 10:30:04 crc kubenswrapper[4735]: I0128 10:30:04.080476 4735 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493270-rjrvz" Jan 28 10:30:04 crc kubenswrapper[4735]: I0128 10:30:04.574094 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp"] Jan 28 10:30:04 crc kubenswrapper[4735]: I0128 10:30:04.584550 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493225-l7hlp"] Jan 28 10:30:05 crc kubenswrapper[4735]: I0128 10:30:05.430494 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:30:05 crc kubenswrapper[4735]: I0128 10:30:05.430574 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:30:05 crc kubenswrapper[4735]: I0128 10:30:05.517907 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83849342-c2fb-4a4e-a7cb-c94dc2c02b77" path="/var/lib/kubelet/pods/83849342-c2fb-4a4e-a7cb-c94dc2c02b77/volumes" Jan 28 10:30:35 crc kubenswrapper[4735]: I0128 10:30:35.430247 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:30:35 crc kubenswrapper[4735]: I0128 10:30:35.430894 4735 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:30:51 crc kubenswrapper[4735]: I0128 10:30:51.738891 4735 scope.go:117] "RemoveContainer" containerID="3304399f014f57829b756999b8b4ab7ae44d5724705a25165e88bbb5944b58c3" Jan 28 10:30:59 crc kubenswrapper[4735]: I0128 10:30:59.129166 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-79b9899549-8qxf5" podUID="681a40ab-fe34-4825-8336-3b8048fc04b7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 28 10:31:05 crc kubenswrapper[4735]: I0128 10:31:05.429910 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:31:05 crc kubenswrapper[4735]: I0128 10:31:05.430504 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:31:05 crc kubenswrapper[4735]: I0128 10:31:05.430559 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:31:05 crc kubenswrapper[4735]: I0128 10:31:05.431320 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd"} 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:31:05 crc kubenswrapper[4735]: I0128 10:31:05.431373 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" gracePeriod=600 Jan 28 10:31:05 crc kubenswrapper[4735]: E0128 10:31:05.555034 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:31:05 crc kubenswrapper[4735]: I0128 10:31:05.647109 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" exitCode=0 Jan 28 10:31:05 crc kubenswrapper[4735]: I0128 10:31:05.647190 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd"} Jan 28 10:31:05 crc kubenswrapper[4735]: I0128 10:31:05.647254 4735 scope.go:117] "RemoveContainer" containerID="dbfa7576c396dfe90ddc7f77124fc640faab1a0d28cc7b69eb0aff2b00d1d8d0" Jan 28 10:31:05 crc kubenswrapper[4735]: I0128 10:31:05.647872 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 
28 10:31:05 crc kubenswrapper[4735]: E0128 10:31:05.648112 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:31:20 crc kubenswrapper[4735]: I0128 10:31:20.502708 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:31:20 crc kubenswrapper[4735]: E0128 10:31:20.503534 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:31:35 crc kubenswrapper[4735]: I0128 10:31:35.502430 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:31:35 crc kubenswrapper[4735]: E0128 10:31:35.504988 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:31:49 crc kubenswrapper[4735]: I0128 10:31:49.503439 4735 scope.go:117] "RemoveContainer" 
containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:31:49 crc kubenswrapper[4735]: E0128 10:31:49.504138 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:32:03 crc kubenswrapper[4735]: I0128 10:32:03.503077 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:32:03 crc kubenswrapper[4735]: E0128 10:32:03.504281 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:32:16 crc kubenswrapper[4735]: I0128 10:32:16.504275 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:32:16 crc kubenswrapper[4735]: E0128 10:32:16.505517 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:32:27 crc kubenswrapper[4735]: I0128 10:32:27.513765 4735 scope.go:117] 
"RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:32:27 crc kubenswrapper[4735]: E0128 10:32:27.514804 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:32:41 crc kubenswrapper[4735]: I0128 10:32:41.507817 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:32:41 crc kubenswrapper[4735]: E0128 10:32:41.509173 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:32:52 crc kubenswrapper[4735]: I0128 10:32:52.502785 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:32:52 crc kubenswrapper[4735]: E0128 10:32:52.503675 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:33:05 crc kubenswrapper[4735]: I0128 10:33:05.502293 
4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:33:05 crc kubenswrapper[4735]: E0128 10:33:05.503137 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:33:17 crc kubenswrapper[4735]: I0128 10:33:17.509669 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:33:17 crc kubenswrapper[4735]: E0128 10:33:17.510497 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:33:31 crc kubenswrapper[4735]: I0128 10:33:31.503243 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:33:31 crc kubenswrapper[4735]: E0128 10:33:31.504149 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:33:43 crc kubenswrapper[4735]: I0128 
10:33:43.503365 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:33:43 crc kubenswrapper[4735]: E0128 10:33:43.504191 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:33:54 crc kubenswrapper[4735]: I0128 10:33:54.502801 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:33:54 crc kubenswrapper[4735]: E0128 10:33:54.503635 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:34:09 crc kubenswrapper[4735]: I0128 10:34:09.503082 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:34:09 crc kubenswrapper[4735]: E0128 10:34:09.503993 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:34:24 crc 
kubenswrapper[4735]: I0128 10:34:24.503310 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:34:24 crc kubenswrapper[4735]: E0128 10:34:24.505061 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:34:36 crc kubenswrapper[4735]: I0128 10:34:36.503816 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:34:36 crc kubenswrapper[4735]: E0128 10:34:36.504703 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:34:48 crc kubenswrapper[4735]: I0128 10:34:48.505693 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:34:48 crc kubenswrapper[4735]: E0128 10:34:48.507188 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 
28 10:34:59 crc kubenswrapper[4735]: I0128 10:34:59.503880 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:34:59 crc kubenswrapper[4735]: E0128 10:34:59.505103 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:35:13 crc kubenswrapper[4735]: I0128 10:35:13.502719 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:35:13 crc kubenswrapper[4735]: E0128 10:35:13.503775 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:35:28 crc kubenswrapper[4735]: I0128 10:35:28.503167 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:35:28 crc kubenswrapper[4735]: E0128 10:35:28.504144 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" 
podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:35:40 crc kubenswrapper[4735]: I0128 10:35:40.502553 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:35:40 crc kubenswrapper[4735]: E0128 10:35:40.503556 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:35:52 crc kubenswrapper[4735]: I0128 10:35:52.502270 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:35:52 crc kubenswrapper[4735]: E0128 10:35:52.502993 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:36:07 crc kubenswrapper[4735]: I0128 10:36:07.515319 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:36:08 crc kubenswrapper[4735]: I0128 10:36:08.585988 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"9b8984715e78afe1b7f7f353614e464bb80994a6fa2fba665bf31de6a5c08139"} Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.201506 4735 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w65vh"] Jan 28 10:36:19 crc kubenswrapper[4735]: E0128 10:36:19.202660 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88f46f6-1318-4598-93ce-9602f77f3892" containerName="collect-profiles" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.202682 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88f46f6-1318-4598-93ce-9602f77f3892" containerName="collect-profiles" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.202968 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88f46f6-1318-4598-93ce-9602f77f3892" containerName="collect-profiles" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.204825 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.213523 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w65vh"] Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.393796 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzsz2\" (UniqueName: \"kubernetes.io/projected/25a6075b-7144-4c4c-8534-49a80b8f9c9e-kube-api-access-mzsz2\") pod \"certified-operators-w65vh\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.393922 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-utilities\") pod \"certified-operators-w65vh\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.393954 4735 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-catalog-content\") pod \"certified-operators-w65vh\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.496312 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-utilities\") pod \"certified-operators-w65vh\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.496407 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-catalog-content\") pod \"certified-operators-w65vh\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.496532 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzsz2\" (UniqueName: \"kubernetes.io/projected/25a6075b-7144-4c4c-8534-49a80b8f9c9e-kube-api-access-mzsz2\") pod \"certified-operators-w65vh\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.496890 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-utilities\") pod \"certified-operators-w65vh\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.496930 4735 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-catalog-content\") pod \"certified-operators-w65vh\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.521243 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzsz2\" (UniqueName: \"kubernetes.io/projected/25a6075b-7144-4c4c-8534-49a80b8f9c9e-kube-api-access-mzsz2\") pod \"certified-operators-w65vh\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:19 crc kubenswrapper[4735]: I0128 10:36:19.530159 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:20 crc kubenswrapper[4735]: I0128 10:36:20.066512 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w65vh"] Jan 28 10:36:20 crc kubenswrapper[4735]: I0128 10:36:20.695415 4735 generic.go:334] "Generic (PLEG): container finished" podID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerID="811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303" exitCode=0 Jan 28 10:36:20 crc kubenswrapper[4735]: I0128 10:36:20.695678 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w65vh" event={"ID":"25a6075b-7144-4c4c-8534-49a80b8f9c9e","Type":"ContainerDied","Data":"811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303"} Jan 28 10:36:20 crc kubenswrapper[4735]: I0128 10:36:20.695702 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w65vh" event={"ID":"25a6075b-7144-4c4c-8534-49a80b8f9c9e","Type":"ContainerStarted","Data":"2ffd7b38fa69253ed8b2e51f01a1a656776383ac9ae4acc83339588278ab68cd"} Jan 28 10:36:20 crc 
kubenswrapper[4735]: I0128 10:36:20.697624 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 10:36:22 crc kubenswrapper[4735]: I0128 10:36:22.720145 4735 generic.go:334] "Generic (PLEG): container finished" podID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerID="3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d" exitCode=0 Jan 28 10:36:22 crc kubenswrapper[4735]: I0128 10:36:22.720239 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w65vh" event={"ID":"25a6075b-7144-4c4c-8534-49a80b8f9c9e","Type":"ContainerDied","Data":"3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d"} Jan 28 10:36:23 crc kubenswrapper[4735]: I0128 10:36:23.741335 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w65vh" event={"ID":"25a6075b-7144-4c4c-8534-49a80b8f9c9e","Type":"ContainerStarted","Data":"c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2"} Jan 28 10:36:23 crc kubenswrapper[4735]: I0128 10:36:23.771676 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w65vh" podStartSLOduration=2.317887208 podStartE2EDuration="4.77165489s" podCreationTimestamp="2026-01-28 10:36:19 +0000 UTC" firstStartedPulling="2026-01-28 10:36:20.697421054 +0000 UTC m=+3233.847085427" lastFinishedPulling="2026-01-28 10:36:23.151188736 +0000 UTC m=+3236.300853109" observedRunningTime="2026-01-28 10:36:23.757288515 +0000 UTC m=+3236.906952918" watchObservedRunningTime="2026-01-28 10:36:23.77165489 +0000 UTC m=+3236.921319273" Jan 28 10:36:29 crc kubenswrapper[4735]: I0128 10:36:29.531179 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:29 crc kubenswrapper[4735]: I0128 10:36:29.531924 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:29 crc kubenswrapper[4735]: I0128 10:36:29.610498 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:29 crc kubenswrapper[4735]: I0128 10:36:29.844312 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:29 crc kubenswrapper[4735]: I0128 10:36:29.896089 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w65vh"] Jan 28 10:36:31 crc kubenswrapper[4735]: I0128 10:36:31.808321 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w65vh" podUID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerName="registry-server" containerID="cri-o://c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2" gracePeriod=2 Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.408609 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.467297 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-catalog-content\") pod \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.467525 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzsz2\" (UniqueName: \"kubernetes.io/projected/25a6075b-7144-4c4c-8534-49a80b8f9c9e-kube-api-access-mzsz2\") pod \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.467576 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-utilities\") pod \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\" (UID: \"25a6075b-7144-4c4c-8534-49a80b8f9c9e\") " Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.468641 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-utilities" (OuterVolumeSpecName: "utilities") pod "25a6075b-7144-4c4c-8534-49a80b8f9c9e" (UID: "25a6075b-7144-4c4c-8534-49a80b8f9c9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.477714 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a6075b-7144-4c4c-8534-49a80b8f9c9e-kube-api-access-mzsz2" (OuterVolumeSpecName: "kube-api-access-mzsz2") pod "25a6075b-7144-4c4c-8534-49a80b8f9c9e" (UID: "25a6075b-7144-4c4c-8534-49a80b8f9c9e"). InnerVolumeSpecName "kube-api-access-mzsz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.525720 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25a6075b-7144-4c4c-8534-49a80b8f9c9e" (UID: "25a6075b-7144-4c4c-8534-49a80b8f9c9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.568887 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzsz2\" (UniqueName: \"kubernetes.io/projected/25a6075b-7144-4c4c-8534-49a80b8f9c9e-kube-api-access-mzsz2\") on node \"crc\" DevicePath \"\"" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.568917 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.568925 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6075b-7144-4c4c-8534-49a80b8f9c9e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.818660 4735 generic.go:334] "Generic (PLEG): container finished" podID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerID="c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2" exitCode=0 Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.818799 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w65vh" event={"ID":"25a6075b-7144-4c4c-8534-49a80b8f9c9e","Type":"ContainerDied","Data":"c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2"} Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.818964 4735 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-w65vh" event={"ID":"25a6075b-7144-4c4c-8534-49a80b8f9c9e","Type":"ContainerDied","Data":"2ffd7b38fa69253ed8b2e51f01a1a656776383ac9ae4acc83339588278ab68cd"} Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.818991 4735 scope.go:117] "RemoveContainer" containerID="c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.818854 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w65vh" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.853930 4735 scope.go:117] "RemoveContainer" containerID="3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.857097 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w65vh"] Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.866821 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w65vh"] Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.876345 4735 scope.go:117] "RemoveContainer" containerID="811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.936398 4735 scope.go:117] "RemoveContainer" containerID="c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2" Jan 28 10:36:32 crc kubenswrapper[4735]: E0128 10:36:32.936949 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2\": container with ID starting with c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2 not found: ID does not exist" containerID="c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 
10:36:32.937000 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2"} err="failed to get container status \"c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2\": rpc error: code = NotFound desc = could not find container \"c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2\": container with ID starting with c3b03994aa97cf438545c0a928dcaf24753b8e7841a0163dc01d5ac551a25bf2 not found: ID does not exist" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.937031 4735 scope.go:117] "RemoveContainer" containerID="3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d" Jan 28 10:36:32 crc kubenswrapper[4735]: E0128 10:36:32.937548 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d\": container with ID starting with 3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d not found: ID does not exist" containerID="3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.937575 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d"} err="failed to get container status \"3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d\": rpc error: code = NotFound desc = could not find container \"3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d\": container with ID starting with 3466b90b1764779d22de5480bcc16104921e2397d9465388a6b12f2ce347474d not found: ID does not exist" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.937594 4735 scope.go:117] "RemoveContainer" containerID="811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303" Jan 28 10:36:32 crc 
kubenswrapper[4735]: E0128 10:36:32.937941 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303\": container with ID starting with 811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303 not found: ID does not exist" containerID="811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303" Jan 28 10:36:32 crc kubenswrapper[4735]: I0128 10:36:32.937976 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303"} err="failed to get container status \"811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303\": rpc error: code = NotFound desc = could not find container \"811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303\": container with ID starting with 811ea1d07824f85e127f609fbce9efeb6603417128896a24cf6cde43e7e77303 not found: ID does not exist" Jan 28 10:36:33 crc kubenswrapper[4735]: I0128 10:36:33.524450 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" path="/var/lib/kubelet/pods/25a6075b-7144-4c4c-8534-49a80b8f9c9e/volumes" Jan 28 10:38:35 crc kubenswrapper[4735]: I0128 10:38:35.430033 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:38:35 crc kubenswrapper[4735]: I0128 10:38:35.430532 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 28 10:39:05 crc kubenswrapper[4735]: I0128 10:39:05.430830 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:39:05 crc kubenswrapper[4735]: I0128 10:39:05.433219 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:39:35 crc kubenswrapper[4735]: I0128 10:39:35.430058 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:39:35 crc kubenswrapper[4735]: I0128 10:39:35.430681 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:39:35 crc kubenswrapper[4735]: I0128 10:39:35.430731 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:39:35 crc kubenswrapper[4735]: I0128 10:39:35.431558 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9b8984715e78afe1b7f7f353614e464bb80994a6fa2fba665bf31de6a5c08139"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:39:35 crc kubenswrapper[4735]: I0128 10:39:35.431619 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://9b8984715e78afe1b7f7f353614e464bb80994a6fa2fba665bf31de6a5c08139" gracePeriod=600 Jan 28 10:39:35 crc kubenswrapper[4735]: I0128 10:39:35.640066 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="9b8984715e78afe1b7f7f353614e464bb80994a6fa2fba665bf31de6a5c08139" exitCode=0 Jan 28 10:39:35 crc kubenswrapper[4735]: I0128 10:39:35.640212 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"9b8984715e78afe1b7f7f353614e464bb80994a6fa2fba665bf31de6a5c08139"} Jan 28 10:39:35 crc kubenswrapper[4735]: I0128 10:39:35.640456 4735 scope.go:117] "RemoveContainer" containerID="5130a169c882d22b0de1af383d3fef637de742eb0727e28d1d9a66cf0dd736fd" Jan 28 10:39:36 crc kubenswrapper[4735]: I0128 10:39:36.653780 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d"} Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.763160 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qtqt8"] Jan 28 10:39:56 crc kubenswrapper[4735]: E0128 10:39:56.764156 
4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerName="registry-server" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.764173 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerName="registry-server" Jan 28 10:39:56 crc kubenswrapper[4735]: E0128 10:39:56.764204 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerName="extract-utilities" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.764212 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerName="extract-utilities" Jan 28 10:39:56 crc kubenswrapper[4735]: E0128 10:39:56.764225 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerName="extract-content" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.764233 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerName="extract-content" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.764459 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a6075b-7144-4c4c-8534-49a80b8f9c9e" containerName="registry-server" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.765744 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.790292 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtqt8"] Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.850257 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvj8j\" (UniqueName: \"kubernetes.io/projected/f37972be-d81f-41bc-8378-c68753e6494f-kube-api-access-fvj8j\") pod \"redhat-marketplace-qtqt8\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.850643 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-catalog-content\") pod \"redhat-marketplace-qtqt8\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.850778 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-utilities\") pod \"redhat-marketplace-qtqt8\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.952569 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-utilities\") pod \"redhat-marketplace-qtqt8\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.952676 4735 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fvj8j\" (UniqueName: \"kubernetes.io/projected/f37972be-d81f-41bc-8378-c68753e6494f-kube-api-access-fvj8j\") pod \"redhat-marketplace-qtqt8\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.952734 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-catalog-content\") pod \"redhat-marketplace-qtqt8\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.953294 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-utilities\") pod \"redhat-marketplace-qtqt8\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.953326 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-catalog-content\") pod \"redhat-marketplace-qtqt8\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:56 crc kubenswrapper[4735]: I0128 10:39:56.974483 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvj8j\" (UniqueName: \"kubernetes.io/projected/f37972be-d81f-41bc-8378-c68753e6494f-kube-api-access-fvj8j\") pod \"redhat-marketplace-qtqt8\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:57 crc kubenswrapper[4735]: I0128 10:39:57.087642 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:39:57 crc kubenswrapper[4735]: I0128 10:39:57.623877 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtqt8"] Jan 28 10:39:57 crc kubenswrapper[4735]: I0128 10:39:57.864136 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtqt8" event={"ID":"f37972be-d81f-41bc-8378-c68753e6494f","Type":"ContainerStarted","Data":"c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69"} Jan 28 10:39:57 crc kubenswrapper[4735]: I0128 10:39:57.864437 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtqt8" event={"ID":"f37972be-d81f-41bc-8378-c68753e6494f","Type":"ContainerStarted","Data":"2e09506d0ba7534f51b628cefe68023f19eb1f0ea4c4c9d59cd52c9bca2e50c3"} Jan 28 10:39:57 crc kubenswrapper[4735]: I0128 10:39:57.868913 4735 generic.go:334] "Generic (PLEG): container finished" podID="d5faba03-f0db-452a-bf0c-271d930e27b5" containerID="8462a6dffa0441a2f47ecf732bfe7d3ffb2dd9799a5816e068dd6475a9945a14" exitCode=0 Jan 28 10:39:57 crc kubenswrapper[4735]: I0128 10:39:57.868960 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5faba03-f0db-452a-bf0c-271d930e27b5","Type":"ContainerDied","Data":"8462a6dffa0441a2f47ecf732bfe7d3ffb2dd9799a5816e068dd6475a9945a14"} Jan 28 10:39:58 crc kubenswrapper[4735]: I0128 10:39:58.884106 4735 generic.go:334] "Generic (PLEG): container finished" podID="f37972be-d81f-41bc-8378-c68753e6494f" containerID="c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69" exitCode=0 Jan 28 10:39:58 crc kubenswrapper[4735]: I0128 10:39:58.884608 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtqt8" 
event={"ID":"f37972be-d81f-41bc-8378-c68753e6494f","Type":"ContainerDied","Data":"c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69"} Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.274793 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.298558 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config\") pod \"d5faba03-f0db-452a-bf0c-271d930e27b5\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.298683 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg256\" (UniqueName: \"kubernetes.io/projected/d5faba03-f0db-452a-bf0c-271d930e27b5-kube-api-access-bg256\") pod \"d5faba03-f0db-452a-bf0c-271d930e27b5\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.298777 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ca-certs\") pod \"d5faba03-f0db-452a-bf0c-271d930e27b5\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.298812 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config-secret\") pod \"d5faba03-f0db-452a-bf0c-271d930e27b5\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.298853 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-temporary\") pod \"d5faba03-f0db-452a-bf0c-271d930e27b5\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.298887 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-workdir\") pod \"d5faba03-f0db-452a-bf0c-271d930e27b5\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.299038 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ssh-key\") pod \"d5faba03-f0db-452a-bf0c-271d930e27b5\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.299120 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-config-data\") pod \"d5faba03-f0db-452a-bf0c-271d930e27b5\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.299216 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d5faba03-f0db-452a-bf0c-271d930e27b5\" (UID: \"d5faba03-f0db-452a-bf0c-271d930e27b5\") " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.300887 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d5faba03-f0db-452a-bf0c-271d930e27b5" (UID: "d5faba03-f0db-452a-bf0c-271d930e27b5"). 
InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.306061 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-config-data" (OuterVolumeSpecName: "config-data") pod "d5faba03-f0db-452a-bf0c-271d930e27b5" (UID: "d5faba03-f0db-452a-bf0c-271d930e27b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.306542 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d5faba03-f0db-452a-bf0c-271d930e27b5" (UID: "d5faba03-f0db-452a-bf0c-271d930e27b5"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.309810 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5faba03-f0db-452a-bf0c-271d930e27b5-kube-api-access-bg256" (OuterVolumeSpecName: "kube-api-access-bg256") pod "d5faba03-f0db-452a-bf0c-271d930e27b5" (UID: "d5faba03-f0db-452a-bf0c-271d930e27b5"). InnerVolumeSpecName "kube-api-access-bg256". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.333828 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d5faba03-f0db-452a-bf0c-271d930e27b5" (UID: "d5faba03-f0db-452a-bf0c-271d930e27b5"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.340418 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d5faba03-f0db-452a-bf0c-271d930e27b5" (UID: "d5faba03-f0db-452a-bf0c-271d930e27b5"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.343371 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d5faba03-f0db-452a-bf0c-271d930e27b5" (UID: "d5faba03-f0db-452a-bf0c-271d930e27b5"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.364194 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d5faba03-f0db-452a-bf0c-271d930e27b5" (UID: "d5faba03-f0db-452a-bf0c-271d930e27b5"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.365823 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d5faba03-f0db-452a-bf0c-271d930e27b5" (UID: "d5faba03-f0db-452a-bf0c-271d930e27b5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.401672 4735 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.401705 4735 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.401739 4735 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.401760 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.401772 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg256\" (UniqueName: \"kubernetes.io/projected/d5faba03-f0db-452a-bf0c-271d930e27b5-kube-api-access-bg256\") on node \"crc\" DevicePath \"\"" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.401783 4735 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.401792 4735 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5faba03-f0db-452a-bf0c-271d930e27b5-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.401802 4735 
reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.401812 4735 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5faba03-f0db-452a-bf0c-271d930e27b5-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.430617 4735 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.503535 4735 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.895050 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5faba03-f0db-452a-bf0c-271d930e27b5","Type":"ContainerDied","Data":"d1e5efc4f706606801760e6c19a1022bb45e279044b0761b0c6d7905d1e378f0"} Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.895095 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e5efc4f706606801760e6c19a1022bb45e279044b0761b0c6d7905d1e378f0" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.895133 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.897916 4735 generic.go:334] "Generic (PLEG): container finished" podID="f37972be-d81f-41bc-8378-c68753e6494f" containerID="21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5" exitCode=0 Jan 28 10:39:59 crc kubenswrapper[4735]: I0128 10:39:59.898002 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtqt8" event={"ID":"f37972be-d81f-41bc-8378-c68753e6494f","Type":"ContainerDied","Data":"21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5"} Jan 28 10:40:01 crc kubenswrapper[4735]: I0128 10:40:01.919660 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtqt8" event={"ID":"f37972be-d81f-41bc-8378-c68753e6494f","Type":"ContainerStarted","Data":"0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56"} Jan 28 10:40:01 crc kubenswrapper[4735]: I0128 10:40:01.942486 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qtqt8" podStartSLOduration=2.442798159 podStartE2EDuration="5.942460496s" podCreationTimestamp="2026-01-28 10:39:56 +0000 UTC" firstStartedPulling="2026-01-28 10:39:57.866494078 +0000 UTC m=+3451.016158451" lastFinishedPulling="2026-01-28 10:40:01.366156375 +0000 UTC m=+3454.515820788" observedRunningTime="2026-01-28 10:40:01.93817469 +0000 UTC m=+3455.087839083" watchObservedRunningTime="2026-01-28 10:40:01.942460496 +0000 UTC m=+3455.092124879" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.408360 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gg9xm"] Jan 28 10:40:03 crc kubenswrapper[4735]: E0128 10:40:03.414159 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5faba03-f0db-452a-bf0c-271d930e27b5" containerName="tempest-tests-tempest-tests-runner" Jan 28 
10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.414335 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5faba03-f0db-452a-bf0c-271d930e27b5" containerName="tempest-tests-tempest-tests-runner" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.415725 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5faba03-f0db-452a-bf0c-271d930e27b5" containerName="tempest-tests-tempest-tests-runner" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.421966 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.433643 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gg9xm"] Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.488237 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-catalog-content\") pod \"community-operators-gg9xm\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.488337 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-utilities\") pod \"community-operators-gg9xm\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.488603 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgw62\" (UniqueName: \"kubernetes.io/projected/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-kube-api-access-xgw62\") pod \"community-operators-gg9xm\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " 
pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.590371 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-utilities\") pod \"community-operators-gg9xm\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.590933 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-utilities\") pod \"community-operators-gg9xm\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.590995 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgw62\" (UniqueName: \"kubernetes.io/projected/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-kube-api-access-xgw62\") pod \"community-operators-gg9xm\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.591078 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-catalog-content\") pod \"community-operators-gg9xm\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.591550 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-catalog-content\") pod \"community-operators-gg9xm\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " 
pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.612939 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgw62\" (UniqueName: \"kubernetes.io/projected/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-kube-api-access-xgw62\") pod \"community-operators-gg9xm\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:03 crc kubenswrapper[4735]: I0128 10:40:03.764968 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:04 crc kubenswrapper[4735]: I0128 10:40:04.261089 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gg9xm"] Jan 28 10:40:04 crc kubenswrapper[4735]: W0128 10:40:04.267129 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c105edf_34bc_44b7_a9a3_fc0406e7e19e.slice/crio-95fdf71edeed44a621826066eab847833cb9964dee252f015f430a1cc37128fd WatchSource:0}: Error finding container 95fdf71edeed44a621826066eab847833cb9964dee252f015f430a1cc37128fd: Status 404 returned error can't find the container with id 95fdf71edeed44a621826066eab847833cb9964dee252f015f430a1cc37128fd Jan 28 10:40:04 crc kubenswrapper[4735]: I0128 10:40:04.957942 4735 generic.go:334] "Generic (PLEG): container finished" podID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerID="12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842" exitCode=0 Jan 28 10:40:04 crc kubenswrapper[4735]: I0128 10:40:04.958080 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg9xm" event={"ID":"2c105edf-34bc-44b7-a9a3-fc0406e7e19e","Type":"ContainerDied","Data":"12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842"} Jan 28 10:40:04 crc kubenswrapper[4735]: I0128 10:40:04.958502 
4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg9xm" event={"ID":"2c105edf-34bc-44b7-a9a3-fc0406e7e19e","Type":"ContainerStarted","Data":"95fdf71edeed44a621826066eab847833cb9964dee252f015f430a1cc37128fd"} Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.804600 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t96v7"] Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.812163 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.816388 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t96v7"] Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.840530 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tprsm\" (UniqueName: \"kubernetes.io/projected/ca517075-f67f-4d8a-8d45-1ed24d1fd197-kube-api-access-tprsm\") pod \"redhat-operators-t96v7\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.840576 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-utilities\") pod \"redhat-operators-t96v7\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.840599 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-catalog-content\") pod \"redhat-operators-t96v7\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " 
pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.941886 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tprsm\" (UniqueName: \"kubernetes.io/projected/ca517075-f67f-4d8a-8d45-1ed24d1fd197-kube-api-access-tprsm\") pod \"redhat-operators-t96v7\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.941933 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-utilities\") pod \"redhat-operators-t96v7\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.941963 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-catalog-content\") pod \"redhat-operators-t96v7\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.942476 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-catalog-content\") pod \"redhat-operators-t96v7\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.942811 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-utilities\") pod \"redhat-operators-t96v7\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc 
kubenswrapper[4735]: I0128 10:40:05.963134 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tprsm\" (UniqueName: \"kubernetes.io/projected/ca517075-f67f-4d8a-8d45-1ed24d1fd197-kube-api-access-tprsm\") pod \"redhat-operators-t96v7\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:05 crc kubenswrapper[4735]: I0128 10:40:05.970554 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg9xm" event={"ID":"2c105edf-34bc-44b7-a9a3-fc0406e7e19e","Type":"ContainerStarted","Data":"08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3"} Jan 28 10:40:06 crc kubenswrapper[4735]: I0128 10:40:06.133239 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:06 crc kubenswrapper[4735]: W0128 10:40:06.612133 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca517075_f67f_4d8a_8d45_1ed24d1fd197.slice/crio-00323024a5158bbda8707af031cbe93cb55f2869fecf4e34192f307fcc5f4609 WatchSource:0}: Error finding container 00323024a5158bbda8707af031cbe93cb55f2869fecf4e34192f307fcc5f4609: Status 404 returned error can't find the container with id 00323024a5158bbda8707af031cbe93cb55f2869fecf4e34192f307fcc5f4609 Jan 28 10:40:06 crc kubenswrapper[4735]: I0128 10:40:06.616176 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t96v7"] Jan 28 10:40:06 crc kubenswrapper[4735]: I0128 10:40:06.982053 4735 generic.go:334] "Generic (PLEG): container finished" podID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerID="08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3" exitCode=0 Jan 28 10:40:06 crc kubenswrapper[4735]: I0128 10:40:06.982142 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gg9xm" event={"ID":"2c105edf-34bc-44b7-a9a3-fc0406e7e19e","Type":"ContainerDied","Data":"08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3"} Jan 28 10:40:06 crc kubenswrapper[4735]: I0128 10:40:06.985172 4735 generic.go:334] "Generic (PLEG): container finished" podID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerID="f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368" exitCode=0 Jan 28 10:40:06 crc kubenswrapper[4735]: I0128 10:40:06.985222 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t96v7" event={"ID":"ca517075-f67f-4d8a-8d45-1ed24d1fd197","Type":"ContainerDied","Data":"f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368"} Jan 28 10:40:06 crc kubenswrapper[4735]: I0128 10:40:06.985253 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t96v7" event={"ID":"ca517075-f67f-4d8a-8d45-1ed24d1fd197","Type":"ContainerStarted","Data":"00323024a5158bbda8707af031cbe93cb55f2869fecf4e34192f307fcc5f4609"} Jan 28 10:40:07 crc kubenswrapper[4735]: I0128 10:40:07.088631 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:40:07 crc kubenswrapper[4735]: I0128 10:40:07.089095 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:40:07 crc kubenswrapper[4735]: I0128 10:40:07.138787 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:40:07 crc kubenswrapper[4735]: I0128 10:40:07.868045 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 10:40:07 crc kubenswrapper[4735]: I0128 10:40:07.869258 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 10:40:07 crc kubenswrapper[4735]: I0128 10:40:07.878216 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nmp45" Jan 28 10:40:07 crc kubenswrapper[4735]: I0128 10:40:07.879199 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 10:40:07 crc kubenswrapper[4735]: I0128 10:40:07.989403 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v24z\" (UniqueName: \"kubernetes.io/projected/1d4b1627-ee2b-44ec-b1e8-4909d05f4d03-kube-api-access-9v24z\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"1d4b1627-ee2b-44ec-b1e8-4909d05f4d03\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 10:40:07 crc kubenswrapper[4735]: I0128 10:40:07.989641 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"1d4b1627-ee2b-44ec-b1e8-4909d05f4d03\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.001582 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg9xm" event={"ID":"2c105edf-34bc-44b7-a9a3-fc0406e7e19e","Type":"ContainerStarted","Data":"f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796"} Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.033419 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gg9xm" podStartSLOduration=2.525719166 podStartE2EDuration="5.033391473s" podCreationTimestamp="2026-01-28 10:40:03 +0000 UTC" 
firstStartedPulling="2026-01-28 10:40:04.960388867 +0000 UTC m=+3458.110053280" lastFinishedPulling="2026-01-28 10:40:07.468061174 +0000 UTC m=+3460.617725587" observedRunningTime="2026-01-28 10:40:08.026519063 +0000 UTC m=+3461.176183476" watchObservedRunningTime="2026-01-28 10:40:08.033391473 +0000 UTC m=+3461.183055876" Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.065304 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.092216 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v24z\" (UniqueName: \"kubernetes.io/projected/1d4b1627-ee2b-44ec-b1e8-4909d05f4d03-kube-api-access-9v24z\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"1d4b1627-ee2b-44ec-b1e8-4909d05f4d03\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.092417 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"1d4b1627-ee2b-44ec-b1e8-4909d05f4d03\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.092955 4735 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"1d4b1627-ee2b-44ec-b1e8-4909d05f4d03\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.116394 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v24z\" 
(UniqueName: \"kubernetes.io/projected/1d4b1627-ee2b-44ec-b1e8-4909d05f4d03-kube-api-access-9v24z\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"1d4b1627-ee2b-44ec-b1e8-4909d05f4d03\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.119731 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"1d4b1627-ee2b-44ec-b1e8-4909d05f4d03\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.199362 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 10:40:08 crc kubenswrapper[4735]: I0128 10:40:08.679435 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 10:40:09 crc kubenswrapper[4735]: I0128 10:40:09.011659 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"1d4b1627-ee2b-44ec-b1e8-4909d05f4d03","Type":"ContainerStarted","Data":"cc71c444f6af345969363e9116580e7d91dd5980055bffb6385b9512b5d220da"} Jan 28 10:40:09 crc kubenswrapper[4735]: I0128 10:40:09.013904 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t96v7" event={"ID":"ca517075-f67f-4d8a-8d45-1ed24d1fd197","Type":"ContainerStarted","Data":"8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc"} Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.201944 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtqt8"] Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.203097 4735 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qtqt8" podUID="f37972be-d81f-41bc-8378-c68753e6494f" containerName="registry-server" containerID="cri-o://0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56" gracePeriod=2 Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.670686 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.745897 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-catalog-content\") pod \"f37972be-d81f-41bc-8378-c68753e6494f\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.745953 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-utilities\") pod \"f37972be-d81f-41bc-8378-c68753e6494f\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.745987 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvj8j\" (UniqueName: \"kubernetes.io/projected/f37972be-d81f-41bc-8378-c68753e6494f-kube-api-access-fvj8j\") pod \"f37972be-d81f-41bc-8378-c68753e6494f\" (UID: \"f37972be-d81f-41bc-8378-c68753e6494f\") " Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.747976 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-utilities" (OuterVolumeSpecName: "utilities") pod "f37972be-d81f-41bc-8378-c68753e6494f" (UID: "f37972be-d81f-41bc-8378-c68753e6494f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.754355 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37972be-d81f-41bc-8378-c68753e6494f-kube-api-access-fvj8j" (OuterVolumeSpecName: "kube-api-access-fvj8j") pod "f37972be-d81f-41bc-8378-c68753e6494f" (UID: "f37972be-d81f-41bc-8378-c68753e6494f"). InnerVolumeSpecName "kube-api-access-fvj8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.767032 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f37972be-d81f-41bc-8378-c68753e6494f" (UID: "f37972be-d81f-41bc-8378-c68753e6494f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.848564 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.848598 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37972be-d81f-41bc-8378-c68753e6494f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:40:10 crc kubenswrapper[4735]: I0128 10:40:10.848608 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvj8j\" (UniqueName: \"kubernetes.io/projected/f37972be-d81f-41bc-8378-c68753e6494f-kube-api-access-fvj8j\") on node \"crc\" DevicePath \"\"" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.041513 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
event={"ID":"1d4b1627-ee2b-44ec-b1e8-4909d05f4d03","Type":"ContainerStarted","Data":"69fb69c25b923f9943ee3b168f0bf3f361d225e9c812d3ba7be0fcc70ada7724"} Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.046382 4735 generic.go:334] "Generic (PLEG): container finished" podID="f37972be-d81f-41bc-8378-c68753e6494f" containerID="0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56" exitCode=0 Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.046428 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtqt8" event={"ID":"f37972be-d81f-41bc-8378-c68753e6494f","Type":"ContainerDied","Data":"0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56"} Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.046517 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtqt8" event={"ID":"f37972be-d81f-41bc-8378-c68753e6494f","Type":"ContainerDied","Data":"2e09506d0ba7534f51b628cefe68023f19eb1f0ea4c4c9d59cd52c9bca2e50c3"} Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.046461 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtqt8" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.046542 4735 scope.go:117] "RemoveContainer" containerID="0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.066584 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.7734663 podStartE2EDuration="4.066558532s" podCreationTimestamp="2026-01-28 10:40:07 +0000 UTC" firstStartedPulling="2026-01-28 10:40:08.69070018 +0000 UTC m=+3461.840364593" lastFinishedPulling="2026-01-28 10:40:09.983792452 +0000 UTC m=+3463.133456825" observedRunningTime="2026-01-28 10:40:11.0632605 +0000 UTC m=+3464.212924913" watchObservedRunningTime="2026-01-28 10:40:11.066558532 +0000 UTC m=+3464.216222925" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.107704 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtqt8"] Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.108567 4735 scope.go:117] "RemoveContainer" containerID="21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.127261 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtqt8"] Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.147478 4735 scope.go:117] "RemoveContainer" containerID="c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.219666 4735 scope.go:117] "RemoveContainer" containerID="0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56" Jan 28 10:40:11 crc kubenswrapper[4735]: E0128 10:40:11.220951 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56\": container with ID starting with 0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56 not found: ID does not exist" containerID="0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.220980 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56"} err="failed to get container status \"0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56\": rpc error: code = NotFound desc = could not find container \"0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56\": container with ID starting with 0f6837f1a0730fe60cf493ab1c0a36538ddac611f3a190c8d306472aa8df9a56 not found: ID does not exist" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.220999 4735 scope.go:117] "RemoveContainer" containerID="21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5" Jan 28 10:40:11 crc kubenswrapper[4735]: E0128 10:40:11.221278 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5\": container with ID starting with 21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5 not found: ID does not exist" containerID="21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.221323 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5"} err="failed to get container status \"21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5\": rpc error: code = NotFound desc = could not find container \"21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5\": container with ID 
starting with 21f5e8e221fe2b155c28a7c4a00ee56eec4a8ee847e5cfbbfd5a413472a753d5 not found: ID does not exist" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.221355 4735 scope.go:117] "RemoveContainer" containerID="c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69" Jan 28 10:40:11 crc kubenswrapper[4735]: E0128 10:40:11.221729 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69\": container with ID starting with c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69 not found: ID does not exist" containerID="c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.221797 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69"} err="failed to get container status \"c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69\": rpc error: code = NotFound desc = could not find container \"c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69\": container with ID starting with c5d11ff3b96f1b4eec11fc76e2fa9cb6530205196ae024af0818971cb34c5e69 not found: ID does not exist" Jan 28 10:40:11 crc kubenswrapper[4735]: I0128 10:40:11.519507 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37972be-d81f-41bc-8378-c68753e6494f" path="/var/lib/kubelet/pods/f37972be-d81f-41bc-8378-c68753e6494f/volumes" Jan 28 10:40:13 crc kubenswrapper[4735]: I0128 10:40:13.765116 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:13 crc kubenswrapper[4735]: I0128 10:40:13.765555 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:13 crc 
kubenswrapper[4735]: I0128 10:40:13.855231 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:14 crc kubenswrapper[4735]: I0128 10:40:14.084793 4735 generic.go:334] "Generic (PLEG): container finished" podID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerID="8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc" exitCode=0 Jan 28 10:40:14 crc kubenswrapper[4735]: I0128 10:40:14.084877 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t96v7" event={"ID":"ca517075-f67f-4d8a-8d45-1ed24d1fd197","Type":"ContainerDied","Data":"8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc"} Jan 28 10:40:14 crc kubenswrapper[4735]: I0128 10:40:14.166200 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:15 crc kubenswrapper[4735]: I0128 10:40:15.099318 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t96v7" event={"ID":"ca517075-f67f-4d8a-8d45-1ed24d1fd197","Type":"ContainerStarted","Data":"9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a"} Jan 28 10:40:15 crc kubenswrapper[4735]: I0128 10:40:15.147529 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t96v7" podStartSLOduration=2.558944576 podStartE2EDuration="10.147500754s" podCreationTimestamp="2026-01-28 10:40:05 +0000 UTC" firstStartedPulling="2026-01-28 10:40:06.990555332 +0000 UTC m=+3460.140219725" lastFinishedPulling="2026-01-28 10:40:14.57911149 +0000 UTC m=+3467.728775903" observedRunningTime="2026-01-28 10:40:15.130431481 +0000 UTC m=+3468.280095864" watchObservedRunningTime="2026-01-28 10:40:15.147500754 +0000 UTC m=+3468.297165127" Jan 28 10:40:16 crc kubenswrapper[4735]: I0128 10:40:16.134191 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:16 crc kubenswrapper[4735]: I0128 10:40:16.134254 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:16 crc kubenswrapper[4735]: I0128 10:40:16.595786 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gg9xm"] Jan 28 10:40:16 crc kubenswrapper[4735]: I0128 10:40:16.596336 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gg9xm" podUID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerName="registry-server" containerID="cri-o://f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796" gracePeriod=2 Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.098671 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.125235 4735 generic.go:334] "Generic (PLEG): container finished" podID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerID="f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796" exitCode=0 Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.125290 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg9xm" event={"ID":"2c105edf-34bc-44b7-a9a3-fc0406e7e19e","Type":"ContainerDied","Data":"f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796"} Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.125306 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gg9xm" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.125321 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg9xm" event={"ID":"2c105edf-34bc-44b7-a9a3-fc0406e7e19e","Type":"ContainerDied","Data":"95fdf71edeed44a621826066eab847833cb9964dee252f015f430a1cc37128fd"} Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.125340 4735 scope.go:117] "RemoveContainer" containerID="f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.164030 4735 scope.go:117] "RemoveContainer" containerID="08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.187091 4735 scope.go:117] "RemoveContainer" containerID="12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.203034 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t96v7" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerName="registry-server" probeResult="failure" output=< Jan 28 10:40:17 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 28 10:40:17 crc kubenswrapper[4735]: > Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.238979 4735 scope.go:117] "RemoveContainer" containerID="f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796" Jan 28 10:40:17 crc kubenswrapper[4735]: E0128 10:40:17.239332 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796\": container with ID starting with f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796 not found: ID does not exist" containerID="f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796" 
Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.239425 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796"} err="failed to get container status \"f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796\": rpc error: code = NotFound desc = could not find container \"f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796\": container with ID starting with f45f1aea1312f677399531c5e7230a9e74ce44b301a964c34ae0e0f03bdfc796 not found: ID does not exist" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.239518 4735 scope.go:117] "RemoveContainer" containerID="08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3" Jan 28 10:40:17 crc kubenswrapper[4735]: E0128 10:40:17.240069 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3\": container with ID starting with 08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3 not found: ID does not exist" containerID="08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.240119 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3"} err="failed to get container status \"08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3\": rpc error: code = NotFound desc = could not find container \"08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3\": container with ID starting with 08130cefa2d1d36a4f6028e1164b7be0319b2611fd22b1071e54e6a3a005d5c3 not found: ID does not exist" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.240147 4735 scope.go:117] "RemoveContainer" 
containerID="12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842" Jan 28 10:40:17 crc kubenswrapper[4735]: E0128 10:40:17.240452 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842\": container with ID starting with 12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842 not found: ID does not exist" containerID="12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.240478 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842"} err="failed to get container status \"12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842\": rpc error: code = NotFound desc = could not find container \"12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842\": container with ID starting with 12e43ac93b87c8f708d8243368c8fe21b6f34061b33c9ab7dc4dfd13555b2842 not found: ID does not exist" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.284459 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-utilities\") pod \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.284864 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-catalog-content\") pod \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.285029 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xgw62\" (UniqueName: \"kubernetes.io/projected/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-kube-api-access-xgw62\") pod \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\" (UID: \"2c105edf-34bc-44b7-a9a3-fc0406e7e19e\") " Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.285281 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-utilities" (OuterVolumeSpecName: "utilities") pod "2c105edf-34bc-44b7-a9a3-fc0406e7e19e" (UID: "2c105edf-34bc-44b7-a9a3-fc0406e7e19e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.285837 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.294976 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-kube-api-access-xgw62" (OuterVolumeSpecName: "kube-api-access-xgw62") pod "2c105edf-34bc-44b7-a9a3-fc0406e7e19e" (UID: "2c105edf-34bc-44b7-a9a3-fc0406e7e19e"). InnerVolumeSpecName "kube-api-access-xgw62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.338551 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c105edf-34bc-44b7-a9a3-fc0406e7e19e" (UID: "2c105edf-34bc-44b7-a9a3-fc0406e7e19e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.388001 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.388042 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgw62\" (UniqueName: \"kubernetes.io/projected/2c105edf-34bc-44b7-a9a3-fc0406e7e19e-kube-api-access-xgw62\") on node \"crc\" DevicePath \"\"" Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.466490 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gg9xm"] Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.480091 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gg9xm"] Jan 28 10:40:17 crc kubenswrapper[4735]: I0128 10:40:17.520343 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" path="/var/lib/kubelet/pods/2c105edf-34bc-44b7-a9a3-fc0406e7e19e/volumes" Jan 28 10:40:26 crc kubenswrapper[4735]: I0128 10:40:26.206270 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:26 crc kubenswrapper[4735]: I0128 10:40:26.280332 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:26 crc kubenswrapper[4735]: I0128 10:40:26.450062 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t96v7"] Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.246002 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t96v7" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" 
containerName="registry-server" containerID="cri-o://9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a" gracePeriod=2 Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.730176 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.808333 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tprsm\" (UniqueName: \"kubernetes.io/projected/ca517075-f67f-4d8a-8d45-1ed24d1fd197-kube-api-access-tprsm\") pod \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.809371 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-utilities\") pod \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.809473 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-catalog-content\") pod \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\" (UID: \"ca517075-f67f-4d8a-8d45-1ed24d1fd197\") " Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.811078 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-utilities" (OuterVolumeSpecName: "utilities") pod "ca517075-f67f-4d8a-8d45-1ed24d1fd197" (UID: "ca517075-f67f-4d8a-8d45-1ed24d1fd197"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.816117 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca517075-f67f-4d8a-8d45-1ed24d1fd197-kube-api-access-tprsm" (OuterVolumeSpecName: "kube-api-access-tprsm") pod "ca517075-f67f-4d8a-8d45-1ed24d1fd197" (UID: "ca517075-f67f-4d8a-8d45-1ed24d1fd197"). InnerVolumeSpecName "kube-api-access-tprsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.912451 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.912508 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tprsm\" (UniqueName: \"kubernetes.io/projected/ca517075-f67f-4d8a-8d45-1ed24d1fd197-kube-api-access-tprsm\") on node \"crc\" DevicePath \"\"" Jan 28 10:40:27 crc kubenswrapper[4735]: I0128 10:40:27.978319 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca517075-f67f-4d8a-8d45-1ed24d1fd197" (UID: "ca517075-f67f-4d8a-8d45-1ed24d1fd197"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.013881 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca517075-f67f-4d8a-8d45-1ed24d1fd197-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.264176 4735 generic.go:334] "Generic (PLEG): container finished" podID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerID="9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a" exitCode=0 Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.264230 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t96v7" event={"ID":"ca517075-f67f-4d8a-8d45-1ed24d1fd197","Type":"ContainerDied","Data":"9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a"} Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.264292 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t96v7" event={"ID":"ca517075-f67f-4d8a-8d45-1ed24d1fd197","Type":"ContainerDied","Data":"00323024a5158bbda8707af031cbe93cb55f2869fecf4e34192f307fcc5f4609"} Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.264323 4735 scope.go:117] "RemoveContainer" containerID="9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.264334 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t96v7" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.295451 4735 scope.go:117] "RemoveContainer" containerID="8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.324719 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t96v7"] Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.334208 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t96v7"] Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.353029 4735 scope.go:117] "RemoveContainer" containerID="f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.410259 4735 scope.go:117] "RemoveContainer" containerID="9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a" Jan 28 10:40:28 crc kubenswrapper[4735]: E0128 10:40:28.411344 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a\": container with ID starting with 9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a not found: ID does not exist" containerID="9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.411414 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a"} err="failed to get container status \"9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a\": rpc error: code = NotFound desc = could not find container \"9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a\": container with ID starting with 9bbc70f0e4f8e746d78447c03470be12c9e6468a5035380b86ca0affe878a75a not found: ID does 
not exist" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.411459 4735 scope.go:117] "RemoveContainer" containerID="8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc" Jan 28 10:40:28 crc kubenswrapper[4735]: E0128 10:40:28.415182 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc\": container with ID starting with 8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc not found: ID does not exist" containerID="8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.415250 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc"} err="failed to get container status \"8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc\": rpc error: code = NotFound desc = could not find container \"8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc\": container with ID starting with 8219a1854b4265d05cb6f0978fe20b5bb223543129f5aa889939d1ae9dae32bc not found: ID does not exist" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.415295 4735 scope.go:117] "RemoveContainer" containerID="f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368" Jan 28 10:40:28 crc kubenswrapper[4735]: E0128 10:40:28.418454 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368\": container with ID starting with f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368 not found: ID does not exist" containerID="f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368" Jan 28 10:40:28 crc kubenswrapper[4735]: I0128 10:40:28.418496 4735 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368"} err="failed to get container status \"f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368\": rpc error: code = NotFound desc = could not find container \"f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368\": container with ID starting with f1a01ab893b4969ed4a530cac2431b0ad8b6a89a7c6843f119267ae7817ac368 not found: ID does not exist" Jan 28 10:40:29 crc kubenswrapper[4735]: I0128 10:40:29.520370 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" path="/var/lib/kubelet/pods/ca517075-f67f-4d8a-8d45-1ed24d1fd197/volumes" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.384183 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg5dj/must-gather-g8wtq"] Jan 28 10:40:32 crc kubenswrapper[4735]: E0128 10:40:32.385008 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37972be-d81f-41bc-8378-c68753e6494f" containerName="registry-server" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385023 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37972be-d81f-41bc-8378-c68753e6494f" containerName="registry-server" Jan 28 10:40:32 crc kubenswrapper[4735]: E0128 10:40:32.385043 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerName="registry-server" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385051 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerName="registry-server" Jan 28 10:40:32 crc kubenswrapper[4735]: E0128 10:40:32.385074 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37972be-d81f-41bc-8378-c68753e6494f" containerName="extract-utilities" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385082 4735 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f37972be-d81f-41bc-8378-c68753e6494f" containerName="extract-utilities" Jan 28 10:40:32 crc kubenswrapper[4735]: E0128 10:40:32.385099 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerName="extract-content" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385107 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerName="extract-content" Jan 28 10:40:32 crc kubenswrapper[4735]: E0128 10:40:32.385127 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37972be-d81f-41bc-8378-c68753e6494f" containerName="extract-content" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385134 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37972be-d81f-41bc-8378-c68753e6494f" containerName="extract-content" Jan 28 10:40:32 crc kubenswrapper[4735]: E0128 10:40:32.385156 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerName="registry-server" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385163 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerName="registry-server" Jan 28 10:40:32 crc kubenswrapper[4735]: E0128 10:40:32.385175 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerName="extract-utilities" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385183 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerName="extract-utilities" Jan 28 10:40:32 crc kubenswrapper[4735]: E0128 10:40:32.385198 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerName="extract-utilities" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385207 4735 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerName="extract-utilities" Jan 28 10:40:32 crc kubenswrapper[4735]: E0128 10:40:32.385216 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerName="extract-content" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385223 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerName="extract-content" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385430 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37972be-d81f-41bc-8378-c68753e6494f" containerName="registry-server" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385459 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca517075-f67f-4d8a-8d45-1ed24d1fd197" containerName="registry-server" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.385474 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c105edf-34bc-44b7-a9a3-fc0406e7e19e" containerName="registry-server" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.386652 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.394857 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jg5dj"/"kube-root-ca.crt" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.395400 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jg5dj"/"openshift-service-ca.crt" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.404636 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jg5dj/must-gather-g8wtq"] Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.510024 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d10fba85-6c88-4282-85db-b2f5a283943f-must-gather-output\") pod \"must-gather-g8wtq\" (UID: \"d10fba85-6c88-4282-85db-b2f5a283943f\") " pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.510450 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxd88\" (UniqueName: \"kubernetes.io/projected/d10fba85-6c88-4282-85db-b2f5a283943f-kube-api-access-bxd88\") pod \"must-gather-g8wtq\" (UID: \"d10fba85-6c88-4282-85db-b2f5a283943f\") " pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.611939 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxd88\" (UniqueName: \"kubernetes.io/projected/d10fba85-6c88-4282-85db-b2f5a283943f-kube-api-access-bxd88\") pod \"must-gather-g8wtq\" (UID: \"d10fba85-6c88-4282-85db-b2f5a283943f\") " pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.612034 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d10fba85-6c88-4282-85db-b2f5a283943f-must-gather-output\") pod \"must-gather-g8wtq\" (UID: \"d10fba85-6c88-4282-85db-b2f5a283943f\") " pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.612660 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d10fba85-6c88-4282-85db-b2f5a283943f-must-gather-output\") pod \"must-gather-g8wtq\" (UID: \"d10fba85-6c88-4282-85db-b2f5a283943f\") " pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.642712 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxd88\" (UniqueName: \"kubernetes.io/projected/d10fba85-6c88-4282-85db-b2f5a283943f-kube-api-access-bxd88\") pod \"must-gather-g8wtq\" (UID: \"d10fba85-6c88-4282-85db-b2f5a283943f\") " pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:40:32 crc kubenswrapper[4735]: I0128 10:40:32.712579 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:40:33 crc kubenswrapper[4735]: I0128 10:40:33.161930 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jg5dj/must-gather-g8wtq"] Jan 28 10:40:33 crc kubenswrapper[4735]: I0128 10:40:33.329912 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" event={"ID":"d10fba85-6c88-4282-85db-b2f5a283943f","Type":"ContainerStarted","Data":"e820609a217c96c18877a004ab3d7b4ba6b6497e3a58ed8fb2bbccada7b46e3b"} Jan 28 10:40:39 crc kubenswrapper[4735]: I0128 10:40:39.403026 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" event={"ID":"d10fba85-6c88-4282-85db-b2f5a283943f","Type":"ContainerStarted","Data":"1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f"} Jan 28 10:40:40 crc kubenswrapper[4735]: I0128 10:40:40.422523 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" event={"ID":"d10fba85-6c88-4282-85db-b2f5a283943f","Type":"ContainerStarted","Data":"10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997"} Jan 28 10:40:42 crc kubenswrapper[4735]: I0128 10:40:42.931869 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" podStartSLOduration=5.105764936 podStartE2EDuration="10.931849938s" podCreationTimestamp="2026-01-28 10:40:32 +0000 UTC" firstStartedPulling="2026-01-28 10:40:33.169392499 +0000 UTC m=+3486.319056872" lastFinishedPulling="2026-01-28 10:40:38.995477501 +0000 UTC m=+3492.145141874" observedRunningTime="2026-01-28 10:40:40.451910996 +0000 UTC m=+3493.601575439" watchObservedRunningTime="2026-01-28 10:40:42.931849938 +0000 UTC m=+3496.081514321" Jan 28 10:40:42 crc kubenswrapper[4735]: I0128 10:40:42.934242 4735 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-jg5dj/crc-debug-st2c6"] Jan 28 10:40:42 crc kubenswrapper[4735]: I0128 10:40:42.935571 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:40:42 crc kubenswrapper[4735]: I0128 10:40:42.937934 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jg5dj"/"default-dockercfg-sff7d" Jan 28 10:40:43 crc kubenswrapper[4735]: I0128 10:40:43.021296 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dace56df-4db1-4af9-9fcb-5d855176e70a-host\") pod \"crc-debug-st2c6\" (UID: \"dace56df-4db1-4af9-9fcb-5d855176e70a\") " pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:40:43 crc kubenswrapper[4735]: I0128 10:40:43.021371 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5kbq\" (UniqueName: \"kubernetes.io/projected/dace56df-4db1-4af9-9fcb-5d855176e70a-kube-api-access-b5kbq\") pod \"crc-debug-st2c6\" (UID: \"dace56df-4db1-4af9-9fcb-5d855176e70a\") " pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:40:43 crc kubenswrapper[4735]: I0128 10:40:43.123833 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dace56df-4db1-4af9-9fcb-5d855176e70a-host\") pod \"crc-debug-st2c6\" (UID: \"dace56df-4db1-4af9-9fcb-5d855176e70a\") " pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:40:43 crc kubenswrapper[4735]: I0128 10:40:43.123893 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5kbq\" (UniqueName: \"kubernetes.io/projected/dace56df-4db1-4af9-9fcb-5d855176e70a-kube-api-access-b5kbq\") pod \"crc-debug-st2c6\" (UID: \"dace56df-4db1-4af9-9fcb-5d855176e70a\") " pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:40:43 crc 
kubenswrapper[4735]: I0128 10:40:43.124383 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dace56df-4db1-4af9-9fcb-5d855176e70a-host\") pod \"crc-debug-st2c6\" (UID: \"dace56df-4db1-4af9-9fcb-5d855176e70a\") " pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:40:43 crc kubenswrapper[4735]: I0128 10:40:43.155803 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5kbq\" (UniqueName: \"kubernetes.io/projected/dace56df-4db1-4af9-9fcb-5d855176e70a-kube-api-access-b5kbq\") pod \"crc-debug-st2c6\" (UID: \"dace56df-4db1-4af9-9fcb-5d855176e70a\") " pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:40:43 crc kubenswrapper[4735]: I0128 10:40:43.255640 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:40:43 crc kubenswrapper[4735]: W0128 10:40:43.306392 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddace56df_4db1_4af9_9fcb_5d855176e70a.slice/crio-7ffe8c45f5b9184246b061c79bc421f90044805189dc292b87c56cae6e97a72e WatchSource:0}: Error finding container 7ffe8c45f5b9184246b061c79bc421f90044805189dc292b87c56cae6e97a72e: Status 404 returned error can't find the container with id 7ffe8c45f5b9184246b061c79bc421f90044805189dc292b87c56cae6e97a72e Jan 28 10:40:43 crc kubenswrapper[4735]: I0128 10:40:43.449108 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/crc-debug-st2c6" event={"ID":"dace56df-4db1-4af9-9fcb-5d855176e70a","Type":"ContainerStarted","Data":"7ffe8c45f5b9184246b061c79bc421f90044805189dc292b87c56cae6e97a72e"} Jan 28 10:40:56 crc kubenswrapper[4735]: I0128 10:40:56.564416 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/crc-debug-st2c6" 
event={"ID":"dace56df-4db1-4af9-9fcb-5d855176e70a","Type":"ContainerStarted","Data":"762822defa0d1aec733d9f8321191e7bd5f552b59072181e82328e682f7791a9"} Jan 28 10:40:56 crc kubenswrapper[4735]: I0128 10:40:56.581392 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jg5dj/crc-debug-st2c6" podStartSLOduration=2.167576909 podStartE2EDuration="14.581371614s" podCreationTimestamp="2026-01-28 10:40:42 +0000 UTC" firstStartedPulling="2026-01-28 10:40:43.312576524 +0000 UTC m=+3496.462240897" lastFinishedPulling="2026-01-28 10:40:55.726371239 +0000 UTC m=+3508.876035602" observedRunningTime="2026-01-28 10:40:56.577054658 +0000 UTC m=+3509.726719041" watchObservedRunningTime="2026-01-28 10:40:56.581371614 +0000 UTC m=+3509.731035987" Jan 28 10:41:35 crc kubenswrapper[4735]: I0128 10:41:35.430485 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:41:35 crc kubenswrapper[4735]: I0128 10:41:35.431181 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:41:38 crc kubenswrapper[4735]: I0128 10:41:38.212278 4735 generic.go:334] "Generic (PLEG): container finished" podID="dace56df-4db1-4af9-9fcb-5d855176e70a" containerID="762822defa0d1aec733d9f8321191e7bd5f552b59072181e82328e682f7791a9" exitCode=0 Jan 28 10:41:38 crc kubenswrapper[4735]: I0128 10:41:38.212398 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/crc-debug-st2c6" 
event={"ID":"dace56df-4db1-4af9-9fcb-5d855176e70a","Type":"ContainerDied","Data":"762822defa0d1aec733d9f8321191e7bd5f552b59072181e82328e682f7791a9"} Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.325647 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.362168 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg5dj/crc-debug-st2c6"] Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.371813 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg5dj/crc-debug-st2c6"] Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.430684 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dace56df-4db1-4af9-9fcb-5d855176e70a-host\") pod \"dace56df-4db1-4af9-9fcb-5d855176e70a\" (UID: \"dace56df-4db1-4af9-9fcb-5d855176e70a\") " Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.430767 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dace56df-4db1-4af9-9fcb-5d855176e70a-host" (OuterVolumeSpecName: "host") pod "dace56df-4db1-4af9-9fcb-5d855176e70a" (UID: "dace56df-4db1-4af9-9fcb-5d855176e70a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.430871 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5kbq\" (UniqueName: \"kubernetes.io/projected/dace56df-4db1-4af9-9fcb-5d855176e70a-kube-api-access-b5kbq\") pod \"dace56df-4db1-4af9-9fcb-5d855176e70a\" (UID: \"dace56df-4db1-4af9-9fcb-5d855176e70a\") " Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.431248 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dace56df-4db1-4af9-9fcb-5d855176e70a-host\") on node \"crc\" DevicePath \"\"" Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.436363 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dace56df-4db1-4af9-9fcb-5d855176e70a-kube-api-access-b5kbq" (OuterVolumeSpecName: "kube-api-access-b5kbq") pod "dace56df-4db1-4af9-9fcb-5d855176e70a" (UID: "dace56df-4db1-4af9-9fcb-5d855176e70a"). InnerVolumeSpecName "kube-api-access-b5kbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.515226 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dace56df-4db1-4af9-9fcb-5d855176e70a" path="/var/lib/kubelet/pods/dace56df-4db1-4af9-9fcb-5d855176e70a/volumes" Jan 28 10:41:39 crc kubenswrapper[4735]: I0128 10:41:39.533440 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5kbq\" (UniqueName: \"kubernetes.io/projected/dace56df-4db1-4af9-9fcb-5d855176e70a-kube-api-access-b5kbq\") on node \"crc\" DevicePath \"\"" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.254510 4735 scope.go:117] "RemoveContainer" containerID="762822defa0d1aec733d9f8321191e7bd5f552b59072181e82328e682f7791a9" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.254522 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-st2c6" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.544481 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg5dj/crc-debug-q8rkh"] Jan 28 10:41:40 crc kubenswrapper[4735]: E0128 10:41:40.546142 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dace56df-4db1-4af9-9fcb-5d855176e70a" containerName="container-00" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.546258 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="dace56df-4db1-4af9-9fcb-5d855176e70a" containerName="container-00" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.546625 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="dace56df-4db1-4af9-9fcb-5d855176e70a" containerName="container-00" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.547501 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.549879 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jg5dj"/"default-dockercfg-sff7d" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.657025 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37014874-153f-4c7b-a44e-5f31c8c0ba71-host\") pod \"crc-debug-q8rkh\" (UID: \"37014874-153f-4c7b-a44e-5f31c8c0ba71\") " pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.657628 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzghv\" (UniqueName: \"kubernetes.io/projected/37014874-153f-4c7b-a44e-5f31c8c0ba71-kube-api-access-mzghv\") pod \"crc-debug-q8rkh\" (UID: \"37014874-153f-4c7b-a44e-5f31c8c0ba71\") " 
pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.759074 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzghv\" (UniqueName: \"kubernetes.io/projected/37014874-153f-4c7b-a44e-5f31c8c0ba71-kube-api-access-mzghv\") pod \"crc-debug-q8rkh\" (UID: \"37014874-153f-4c7b-a44e-5f31c8c0ba71\") " pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.759203 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37014874-153f-4c7b-a44e-5f31c8c0ba71-host\") pod \"crc-debug-q8rkh\" (UID: \"37014874-153f-4c7b-a44e-5f31c8c0ba71\") " pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.759352 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37014874-153f-4c7b-a44e-5f31c8c0ba71-host\") pod \"crc-debug-q8rkh\" (UID: \"37014874-153f-4c7b-a44e-5f31c8c0ba71\") " pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.785128 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzghv\" (UniqueName: \"kubernetes.io/projected/37014874-153f-4c7b-a44e-5f31c8c0ba71-kube-api-access-mzghv\") pod \"crc-debug-q8rkh\" (UID: \"37014874-153f-4c7b-a44e-5f31c8c0ba71\") " pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:40 crc kubenswrapper[4735]: I0128 10:41:40.866881 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:41 crc kubenswrapper[4735]: I0128 10:41:41.271032 4735 generic.go:334] "Generic (PLEG): container finished" podID="37014874-153f-4c7b-a44e-5f31c8c0ba71" containerID="14c3ec9cbab122f01263879e59dd188f7ea28c08375dc079449b098176c1051e" exitCode=0 Jan 28 10:41:41 crc kubenswrapper[4735]: I0128 10:41:41.271093 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" event={"ID":"37014874-153f-4c7b-a44e-5f31c8c0ba71","Type":"ContainerDied","Data":"14c3ec9cbab122f01263879e59dd188f7ea28c08375dc079449b098176c1051e"} Jan 28 10:41:41 crc kubenswrapper[4735]: I0128 10:41:41.271474 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" event={"ID":"37014874-153f-4c7b-a44e-5f31c8c0ba71","Type":"ContainerStarted","Data":"ee8a15a45c3ff6175e6ad929108f20b5ecfa2282b76e587f91b6a52583c40b78"} Jan 28 10:41:41 crc kubenswrapper[4735]: I0128 10:41:41.887731 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg5dj/crc-debug-q8rkh"] Jan 28 10:41:41 crc kubenswrapper[4735]: I0128 10:41:41.895963 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg5dj/crc-debug-q8rkh"] Jan 28 10:41:42 crc kubenswrapper[4735]: I0128 10:41:42.419871 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:42 crc kubenswrapper[4735]: I0128 10:41:42.486505 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37014874-153f-4c7b-a44e-5f31c8c0ba71-host\") pod \"37014874-153f-4c7b-a44e-5f31c8c0ba71\" (UID: \"37014874-153f-4c7b-a44e-5f31c8c0ba71\") " Jan 28 10:41:42 crc kubenswrapper[4735]: I0128 10:41:42.486565 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzghv\" (UniqueName: \"kubernetes.io/projected/37014874-153f-4c7b-a44e-5f31c8c0ba71-kube-api-access-mzghv\") pod \"37014874-153f-4c7b-a44e-5f31c8c0ba71\" (UID: \"37014874-153f-4c7b-a44e-5f31c8c0ba71\") " Jan 28 10:41:42 crc kubenswrapper[4735]: I0128 10:41:42.486637 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37014874-153f-4c7b-a44e-5f31c8c0ba71-host" (OuterVolumeSpecName: "host") pod "37014874-153f-4c7b-a44e-5f31c8c0ba71" (UID: "37014874-153f-4c7b-a44e-5f31c8c0ba71"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:41:42 crc kubenswrapper[4735]: I0128 10:41:42.486884 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37014874-153f-4c7b-a44e-5f31c8c0ba71-host\") on node \"crc\" DevicePath \"\"" Jan 28 10:41:42 crc kubenswrapper[4735]: I0128 10:41:42.497634 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37014874-153f-4c7b-a44e-5f31c8c0ba71-kube-api-access-mzghv" (OuterVolumeSpecName: "kube-api-access-mzghv") pod "37014874-153f-4c7b-a44e-5f31c8c0ba71" (UID: "37014874-153f-4c7b-a44e-5f31c8c0ba71"). InnerVolumeSpecName "kube-api-access-mzghv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:41:42 crc kubenswrapper[4735]: I0128 10:41:42.588687 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzghv\" (UniqueName: \"kubernetes.io/projected/37014874-153f-4c7b-a44e-5f31c8c0ba71-kube-api-access-mzghv\") on node \"crc\" DevicePath \"\"" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.107860 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jg5dj/crc-debug-khhd6"] Jan 28 10:41:43 crc kubenswrapper[4735]: E0128 10:41:43.108391 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37014874-153f-4c7b-a44e-5f31c8c0ba71" containerName="container-00" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.108410 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="37014874-153f-4c7b-a44e-5f31c8c0ba71" containerName="container-00" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.108663 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="37014874-153f-4c7b-a44e-5f31c8c0ba71" containerName="container-00" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.109435 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.295700 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee8a15a45c3ff6175e6ad929108f20b5ecfa2282b76e587f91b6a52583c40b78" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.295785 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-q8rkh" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.302103 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca266cf0-0b71-49e6-9a3d-5e16072909de-host\") pod \"crc-debug-khhd6\" (UID: \"ca266cf0-0b71-49e6-9a3d-5e16072909de\") " pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.302210 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4bn9\" (UniqueName: \"kubernetes.io/projected/ca266cf0-0b71-49e6-9a3d-5e16072909de-kube-api-access-x4bn9\") pod \"crc-debug-khhd6\" (UID: \"ca266cf0-0b71-49e6-9a3d-5e16072909de\") " pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.404816 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca266cf0-0b71-49e6-9a3d-5e16072909de-host\") pod \"crc-debug-khhd6\" (UID: \"ca266cf0-0b71-49e6-9a3d-5e16072909de\") " pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.405013 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca266cf0-0b71-49e6-9a3d-5e16072909de-host\") pod \"crc-debug-khhd6\" (UID: \"ca266cf0-0b71-49e6-9a3d-5e16072909de\") " pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.405454 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4bn9\" (UniqueName: \"kubernetes.io/projected/ca266cf0-0b71-49e6-9a3d-5e16072909de-kube-api-access-x4bn9\") pod \"crc-debug-khhd6\" (UID: \"ca266cf0-0b71-49e6-9a3d-5e16072909de\") " pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:41:43 crc 
kubenswrapper[4735]: I0128 10:41:43.438796 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4bn9\" (UniqueName: \"kubernetes.io/projected/ca266cf0-0b71-49e6-9a3d-5e16072909de-kube-api-access-x4bn9\") pod \"crc-debug-khhd6\" (UID: \"ca266cf0-0b71-49e6-9a3d-5e16072909de\") " pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.442956 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:41:43 crc kubenswrapper[4735]: W0128 10:41:43.487179 4735 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca266cf0_0b71_49e6_9a3d_5e16072909de.slice/crio-dc6f80dc3f0fbf6d3586bbff0661f8b6004103c1e9bfd0004821e6608b1b8b35 WatchSource:0}: Error finding container dc6f80dc3f0fbf6d3586bbff0661f8b6004103c1e9bfd0004821e6608b1b8b35: Status 404 returned error can't find the container with id dc6f80dc3f0fbf6d3586bbff0661f8b6004103c1e9bfd0004821e6608b1b8b35 Jan 28 10:41:43 crc kubenswrapper[4735]: I0128 10:41:43.526255 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37014874-153f-4c7b-a44e-5f31c8c0ba71" path="/var/lib/kubelet/pods/37014874-153f-4c7b-a44e-5f31c8c0ba71/volumes" Jan 28 10:41:44 crc kubenswrapper[4735]: I0128 10:41:44.329629 4735 generic.go:334] "Generic (PLEG): container finished" podID="ca266cf0-0b71-49e6-9a3d-5e16072909de" containerID="4d6c56255c4a8318c84066b51a623ca9431ab06c5db944f550c785956c76d588" exitCode=0 Jan 28 10:41:44 crc kubenswrapper[4735]: I0128 10:41:44.329691 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/crc-debug-khhd6" event={"ID":"ca266cf0-0b71-49e6-9a3d-5e16072909de","Type":"ContainerDied","Data":"4d6c56255c4a8318c84066b51a623ca9431ab06c5db944f550c785956c76d588"} Jan 28 10:41:44 crc kubenswrapper[4735]: I0128 10:41:44.330975 4735 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/crc-debug-khhd6" event={"ID":"ca266cf0-0b71-49e6-9a3d-5e16072909de","Type":"ContainerStarted","Data":"dc6f80dc3f0fbf6d3586bbff0661f8b6004103c1e9bfd0004821e6608b1b8b35"} Jan 28 10:41:44 crc kubenswrapper[4735]: I0128 10:41:44.383344 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg5dj/crc-debug-khhd6"] Jan 28 10:41:44 crc kubenswrapper[4735]: I0128 10:41:44.394405 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg5dj/crc-debug-khhd6"] Jan 28 10:41:45 crc kubenswrapper[4735]: I0128 10:41:45.436753 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:41:45 crc kubenswrapper[4735]: I0128 10:41:45.455082 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca266cf0-0b71-49e6-9a3d-5e16072909de-host\") pod \"ca266cf0-0b71-49e6-9a3d-5e16072909de\" (UID: \"ca266cf0-0b71-49e6-9a3d-5e16072909de\") " Jan 28 10:41:45 crc kubenswrapper[4735]: I0128 10:41:45.455242 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca266cf0-0b71-49e6-9a3d-5e16072909de-host" (OuterVolumeSpecName: "host") pod "ca266cf0-0b71-49e6-9a3d-5e16072909de" (UID: "ca266cf0-0b71-49e6-9a3d-5e16072909de"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:41:45 crc kubenswrapper[4735]: I0128 10:41:45.455700 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4bn9\" (UniqueName: \"kubernetes.io/projected/ca266cf0-0b71-49e6-9a3d-5e16072909de-kube-api-access-x4bn9\") pod \"ca266cf0-0b71-49e6-9a3d-5e16072909de\" (UID: \"ca266cf0-0b71-49e6-9a3d-5e16072909de\") " Jan 28 10:41:45 crc kubenswrapper[4735]: I0128 10:41:45.456251 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca266cf0-0b71-49e6-9a3d-5e16072909de-host\") on node \"crc\" DevicePath \"\"" Jan 28 10:41:45 crc kubenswrapper[4735]: I0128 10:41:45.463556 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca266cf0-0b71-49e6-9a3d-5e16072909de-kube-api-access-x4bn9" (OuterVolumeSpecName: "kube-api-access-x4bn9") pod "ca266cf0-0b71-49e6-9a3d-5e16072909de" (UID: "ca266cf0-0b71-49e6-9a3d-5e16072909de"). InnerVolumeSpecName "kube-api-access-x4bn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:41:45 crc kubenswrapper[4735]: I0128 10:41:45.513464 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca266cf0-0b71-49e6-9a3d-5e16072909de" path="/var/lib/kubelet/pods/ca266cf0-0b71-49e6-9a3d-5e16072909de/volumes" Jan 28 10:41:45 crc kubenswrapper[4735]: I0128 10:41:45.557830 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4bn9\" (UniqueName: \"kubernetes.io/projected/ca266cf0-0b71-49e6-9a3d-5e16072909de-kube-api-access-x4bn9\") on node \"crc\" DevicePath \"\"" Jan 28 10:41:46 crc kubenswrapper[4735]: I0128 10:41:46.351527 4735 scope.go:117] "RemoveContainer" containerID="4d6c56255c4a8318c84066b51a623ca9431ab06c5db944f550c785956c76d588" Jan 28 10:41:46 crc kubenswrapper[4735]: I0128 10:41:46.351633 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg5dj/crc-debug-khhd6" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.034925 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f67d6d798-h9vrm_f38494ad-ec5a-4426-9d2c-6ef3c6ad877b/barbican-api/0.log" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.189937 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f67d6d798-h9vrm_f38494ad-ec5a-4426-9d2c-6ef3c6ad877b/barbican-api-log/0.log" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.272475 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74c4c54ddb-zk4nc_039cbbea-550c-4d71-96b2-6e06fcbc8916/barbican-keystone-listener/0.log" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.333495 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74c4c54ddb-zk4nc_039cbbea-550c-4d71-96b2-6e06fcbc8916/barbican-keystone-listener-log/0.log" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.435882 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79b69bb499-htdgz_83266f31-73f1-4b43-94c1-ceeb9ee128b3/barbican-worker/0.log" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.470158 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79b69bb499-htdgz_83266f31-73f1-4b43-94c1-ceeb9ee128b3/barbican-worker-log/0.log" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.615745 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb_7d008e88-e9b6-4c2e-befa-76e9ba1fbadd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.694018 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_923bb257-f04f-4939-ad8a-10c4e6cc8aec/ceilometer-central-agent/0.log" 
Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.772917 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_923bb257-f04f-4939-ad8a-10c4e6cc8aec/ceilometer-notification-agent/0.log" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.801019 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_923bb257-f04f-4939-ad8a-10c4e6cc8aec/proxy-httpd/0.log" Jan 28 10:42:00 crc kubenswrapper[4735]: I0128 10:42:00.832796 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_923bb257-f04f-4939-ad8a-10c4e6cc8aec/sg-core/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.009405 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3fab6043-5ed8-4885-94a2-f26820e9609c/cinder-api/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.022525 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3fab6043-5ed8-4885-94a2-f26820e9609c/cinder-api-log/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.199952 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f68a4b62-3797-4888-b052-e4da32bc4dab/cinder-scheduler/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.220837 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f68a4b62-3797-4888-b052-e4da32bc4dab/probe/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.487692 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vp29s_d981f997-629e-4604-bd7a-4ae05efbeb3f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.562667 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-r8b22_f77b490a-f09b-4e92-a529-69d81c87478e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.665982 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-dpqvs_05f5d1ad-e9d0-4565-be27-0ad67a068056/init/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.874964 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-dpqvs_05f5d1ad-e9d0-4565-be27-0ad67a068056/init/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.912627 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-dpqvs_05f5d1ad-e9d0-4565-be27-0ad67a068056/dnsmasq-dns/0.log" Jan 28 10:42:01 crc kubenswrapper[4735]: I0128 10:42:01.934513 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-6kf42_cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:02 crc kubenswrapper[4735]: I0128 10:42:02.091872 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6/glance-log/0.log" Jan 28 10:42:02 crc kubenswrapper[4735]: I0128 10:42:02.107493 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6/glance-httpd/0.log" Jan 28 10:42:02 crc kubenswrapper[4735]: I0128 10:42:02.305858 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f1d08a0a-642a-404a-a47a-4f98f7d211b0/glance-log/0.log" Jan 28 10:42:02 crc kubenswrapper[4735]: I0128 10:42:02.307920 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_f1d08a0a-642a-404a-a47a-4f98f7d211b0/glance-httpd/0.log" Jan 28 10:42:02 crc kubenswrapper[4735]: I0128 10:42:02.445639 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-55bcb7d5f8-xkjl6_9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79/horizon/0.log" Jan 28 10:42:02 crc kubenswrapper[4735]: I0128 10:42:02.642266 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-v46fq_e9acc786-77bc-4555-b7f4-f0a337175a3c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:02 crc kubenswrapper[4735]: I0128 10:42:02.824125 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-55bcb7d5f8-xkjl6_9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79/horizon-log/0.log" Jan 28 10:42:02 crc kubenswrapper[4735]: I0128 10:42:02.867673 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c9rml_3d0c7cb0-8a7d-4956-8cc0-c61510e64590/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:03 crc kubenswrapper[4735]: I0128 10:42:03.046823 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7b16556d-dd01-471a-a057-84ba1afcc8de/kube-state-metrics/0.log" Jan 28 10:42:03 crc kubenswrapper[4735]: I0128 10:42:03.074464 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69fccdbfc9-k2hlg_fe8fc9bd-9646-437c-818b-d34f1b751e19/keystone-api/0.log" Jan 28 10:42:03 crc kubenswrapper[4735]: I0128 10:42:03.217770 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4_7c337230-3c7c-49aa-97cb-576327428c01/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:03 crc kubenswrapper[4735]: I0128 10:42:03.572342 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-77978c49d9-227fz_585b2106-bf58-400c-8036-bd0220288f1f/neutron-httpd/0.log" Jan 28 10:42:03 crc kubenswrapper[4735]: I0128 10:42:03.632249 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77978c49d9-227fz_585b2106-bf58-400c-8036-bd0220288f1f/neutron-api/0.log" Jan 28 10:42:03 crc kubenswrapper[4735]: I0128 10:42:03.793470 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb_7202ad41-aa50-4012-a34d-df9d19e34a1d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:04 crc kubenswrapper[4735]: I0128 10:42:04.254551 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c6bcae3-d25c-4076-91f5-cb8979303b25/nova-api-log/0.log" Jan 28 10:42:04 crc kubenswrapper[4735]: I0128 10:42:04.263462 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_dd58e731-439b-4ecf-b46a-2e978174ddf9/nova-cell0-conductor-conductor/0.log" Jan 28 10:42:04 crc kubenswrapper[4735]: I0128 10:42:04.465362 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c6bcae3-d25c-4076-91f5-cb8979303b25/nova-api-api/0.log" Jan 28 10:42:04 crc kubenswrapper[4735]: I0128 10:42:04.534273 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_166e2c48-1a0f-4aad-bc51-26f82159a795/nova-cell1-conductor-conductor/0.log" Jan 28 10:42:04 crc kubenswrapper[4735]: I0128 10:42:04.575165 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5d7ae116-a9d5-4dc3-bc01-3a8fb08af693/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 10:42:04 crc kubenswrapper[4735]: I0128 10:42:04.717852 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-dzfkm_71c10727-e356-4d4a-a316-b48e9d5756bd/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:04 crc kubenswrapper[4735]: I0128 10:42:04.920817 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2f9188ec-6ee4-4120-8c9e-ab22d7a57546/nova-metadata-log/0.log" Jan 28 10:42:05 crc kubenswrapper[4735]: I0128 10:42:05.127474 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f3554759-653b-4a68-99e7-7f6cddcd5e5a/nova-scheduler-scheduler/0.log" Jan 28 10:42:05 crc kubenswrapper[4735]: I0128 10:42:05.184366 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_beb7a8ad-5283-42ff-85af-30eb94a8b291/mysql-bootstrap/0.log" Jan 28 10:42:05 crc kubenswrapper[4735]: I0128 10:42:05.360478 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_beb7a8ad-5283-42ff-85af-30eb94a8b291/mysql-bootstrap/0.log" Jan 28 10:42:05 crc kubenswrapper[4735]: I0128 10:42:05.429331 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:42:05 crc kubenswrapper[4735]: I0128 10:42:05.429617 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:42:05 crc kubenswrapper[4735]: I0128 10:42:05.431370 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_beb7a8ad-5283-42ff-85af-30eb94a8b291/galera/0.log" Jan 28 10:42:05 crc kubenswrapper[4735]: I0128 10:42:05.564947 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c58a2fe7-0339-42bb-96f9-084e2b9a3182/mysql-bootstrap/0.log" Jan 28 10:42:05 crc kubenswrapper[4735]: I0128 10:42:05.762113 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c58a2fe7-0339-42bb-96f9-084e2b9a3182/galera/0.log" Jan 28 10:42:05 crc kubenswrapper[4735]: I0128 10:42:05.792310 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c58a2fe7-0339-42bb-96f9-084e2b9a3182/mysql-bootstrap/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.002181 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g77s5_d10ea80e-de8b-48b7-a31c-5e8941549930/openstack-network-exporter/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.004039 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9360da99-a521-416d-868b-69526589a422/openstackclient/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.048480 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2f9188ec-6ee4-4120-8c9e-ab22d7a57546/nova-metadata-metadata/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.244683 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lh9fz_f7a32b8d-eace-4a5f-b187-baf0e7d67479/ovsdb-server-init/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.374519 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lh9fz_f7a32b8d-eace-4a5f-b187-baf0e7d67479/ovs-vswitchd/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.402282 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-lh9fz_f7a32b8d-eace-4a5f-b187-baf0e7d67479/ovsdb-server-init/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.417399 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lh9fz_f7a32b8d-eace-4a5f-b187-baf0e7d67479/ovsdb-server/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.556324 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rdcgm_21c583b0-22fd-4ccf-907a-1bb119119830/ovn-controller/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.645580 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-4l92q_bbd83f26-98f3-47a9-9fa0-7225ba6b97ac/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.742714 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09a13939-6aac-486c-a541-5c3683bff770/openstack-network-exporter/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.866024 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09a13939-6aac-486c-a541-5c3683bff770/ovn-northd/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.961183 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4362d077-82db-43b3-8df1-b60d2d34028c/openstack-network-exporter/0.log" Jan 28 10:42:06 crc kubenswrapper[4735]: I0128 10:42:06.973127 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4362d077-82db-43b3-8df1-b60d2d34028c/ovsdbserver-nb/0.log" Jan 28 10:42:07 crc kubenswrapper[4735]: I0128 10:42:07.172246 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dc607c73-caa4-4986-90b6-1b101b32970b/openstack-network-exporter/0.log" Jan 28 10:42:07 crc kubenswrapper[4735]: I0128 10:42:07.242439 4735 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dc607c73-caa4-4986-90b6-1b101b32970b/ovsdbserver-sb/0.log" Jan 28 10:42:07 crc kubenswrapper[4735]: I0128 10:42:07.429886 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bbdbbfb48-tj5mr_23c7392e-2cdc-4e9a-9a25-9f867a675d48/placement-log/0.log" Jan 28 10:42:07 crc kubenswrapper[4735]: I0128 10:42:07.465724 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bbdbbfb48-tj5mr_23c7392e-2cdc-4e9a-9a25-9f867a675d48/placement-api/0.log" Jan 28 10:42:07 crc kubenswrapper[4735]: I0128 10:42:07.540593 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5514390d-3d1c-45af-b063-0ba631c09e43/setup-container/0.log" Jan 28 10:42:07 crc kubenswrapper[4735]: I0128 10:42:07.696063 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5514390d-3d1c-45af-b063-0ba631c09e43/setup-container/0.log" Jan 28 10:42:07 crc kubenswrapper[4735]: I0128 10:42:07.745553 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5514390d-3d1c-45af-b063-0ba631c09e43/rabbitmq/0.log" Jan 28 10:42:07 crc kubenswrapper[4735]: I0128 10:42:07.796789 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bbefc8b9-3deb-4ae1-a798-1e95f1014380/setup-container/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.041385 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bbefc8b9-3deb-4ae1-a798-1e95f1014380/setup-container/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.046411 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bbefc8b9-3deb-4ae1-a798-1e95f1014380/rabbitmq/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.114207 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc_fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.283004 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-l4f99_84ad913f-0b4f-4644-8aeb-b85bfc1da4ee/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.349490 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz_1b4ed502-7077-42e3-b1a5-a70f4d8c2aab/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.468152 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-p7fn9_2e5e3a3d-adbc-42b9-a436-a3b565cfdb70/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.586613 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5tc5h_812becf2-d241-43e6-b9e4-58263bbfb115/ssh-known-hosts-edpm-deployment/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.823045 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-79b9899549-8qxf5_681a40ab-fe34-4825-8336-3b8048fc04b7/proxy-server/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.894247 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-h9q76_8f9e278a-14f9-4162-acea-69717d616573/swift-ring-rebalance/0.log" Jan 28 10:42:08 crc kubenswrapper[4735]: I0128 10:42:08.915666 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-79b9899549-8qxf5_681a40ab-fe34-4825-8336-3b8048fc04b7/proxy-httpd/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.064236 4735 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/account-auditor/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.128618 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/account-replicator/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.136802 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/account-reaper/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.244973 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/account-server/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.350202 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/container-auditor/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.387900 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/container-server/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.388839 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/container-replicator/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.456076 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/container-updater/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.524715 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-auditor/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.589321 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-expirer/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.638474 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-server/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.652454 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-replicator/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.733185 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-updater/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.797000 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/rsync/0.log" Jan 28 10:42:09 crc kubenswrapper[4735]: I0128 10:42:09.823973 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/swift-recon-cron/0.log" Jan 28 10:42:10 crc kubenswrapper[4735]: I0128 10:42:10.005778 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-f24zk_05947d1f-31c3-4046-b5e5-c70249faa781/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:10 crc kubenswrapper[4735]: I0128 10:42:10.050912 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d5faba03-f0db-452a-bf0c-271d930e27b5/tempest-tests-tempest-tests-runner/0.log" Jan 28 10:42:10 crc kubenswrapper[4735]: I0128 10:42:10.262927 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_1d4b1627-ee2b-44ec-b1e8-4909d05f4d03/test-operator-logs-container/0.log" Jan 28 10:42:10 crc kubenswrapper[4735]: I0128 
10:42:10.263768 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8_eca0937f-f5d2-4682-9ac0-92041c4fac82/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:42:18 crc kubenswrapper[4735]: I0128 10:42:18.955080 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c478e32a-3e57-49eb-8bd0-cff353ad0f56/memcached/0.log" Jan 28 10:42:34 crc kubenswrapper[4735]: I0128 10:42:34.654801 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/util/0.log" Jan 28 10:42:34 crc kubenswrapper[4735]: I0128 10:42:34.825017 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/util/0.log" Jan 28 10:42:34 crc kubenswrapper[4735]: I0128 10:42:34.839564 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/pull/0.log" Jan 28 10:42:34 crc kubenswrapper[4735]: I0128 10:42:34.880969 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/pull/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.021065 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/extract/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.026499 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/util/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.033235 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/pull/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.253827 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-vxtqg_60a1f78e-f6c9-4335-8545-533014df520c/manager/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.273532 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-pxbwq_4eee1f38-6076-431a-83c4-f6ac2c91bf2b/manager/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.386517 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-vlxzd_ad40699a-df4d-4aef-ba9e-ec411784138f/manager/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.429557 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.429670 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.429728 4735 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.430611 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.430709 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" gracePeriod=600 Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.526951 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-5qn45_23c42560-8419-496c-8b21-ebb1315a7e4d/manager/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: E0128 10:42:35.550182 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.638321 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-77khw_ab02d2a5-6cf8-4f01-8381-2ea58829a846/manager/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: 
I0128 10:42:35.743833 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-zwpzq_9ada3a05-5552-4191-ac59-bbeeb2621780/manager/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.840717 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" exitCode=0 Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.840773 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d"} Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.840806 4735 scope.go:117] "RemoveContainer" containerID="9b8984715e78afe1b7f7f353614e464bb80994a6fa2fba665bf31de6a5c08139" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.841517 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:42:35 crc kubenswrapper[4735]: E0128 10:42:35.841821 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.950076 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-mz4t7_8139f6ed-d10a-4966-a7d2-a5a9412bd235/manager/0.log" Jan 28 10:42:35 crc kubenswrapper[4735]: I0128 10:42:35.991781 4735 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-gtbjm_75e33671-8cc3-4470-845b-b45c777c3e9b/manager/0.log" Jan 28 10:42:36 crc kubenswrapper[4735]: I0128 10:42:36.215775 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-b2ths_fa2276dc-647b-427e-9ff5-c25f1d351ee6/manager/0.log" Jan 28 10:42:36 crc kubenswrapper[4735]: I0128 10:42:36.224081 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-lt6sx_dbfb38f5-9834-44f5-a84d-f17bb28538f6/manager/0.log" Jan 28 10:42:36 crc kubenswrapper[4735]: I0128 10:42:36.383330 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr_7ee4898b-8413-497c-b0f8-6003e75d1bd4/manager/0.log" Jan 28 10:42:36 crc kubenswrapper[4735]: I0128 10:42:36.463118 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-xds2t_99934609-5407-442a-a887-ba47b516506a/manager/0.log" Jan 28 10:42:36 crc kubenswrapper[4735]: I0128 10:42:36.638356 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-94dd99d7d-6pbvz_6300756c-e96b-40d5-ab3d-a044cb39016b/manager/0.log" Jan 28 10:42:36 crc kubenswrapper[4735]: I0128 10:42:36.684351 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-fp94q_d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17/manager/0.log" Jan 28 10:42:36 crc kubenswrapper[4735]: I0128 10:42:36.818136 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp_49ce66b9-31db-4d8d-adc4-f7e178cdceec/manager/0.log" Jan 28 10:42:36 crc kubenswrapper[4735]: I0128 10:42:36.963470 4735 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-745854f465-rgjxp_eb187ad2-0ae5-41e5-b649-4a55cdfb6aef/operator/0.log" Jan 28 10:42:37 crc kubenswrapper[4735]: I0128 10:42:37.233959 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nmkt5_f6eacb6b-9066-44c5-a483-02274a4e3884/registry-server/0.log" Jan 28 10:42:37 crc kubenswrapper[4735]: I0128 10:42:37.370918 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-xvd9h_c0301ee4-c19c-425e-9ba4-f66e2b405835/manager/0.log" Jan 28 10:42:37 crc kubenswrapper[4735]: I0128 10:42:37.486062 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-8svwm_b91503a2-0819-4923-a8c2-63f35bded503/manager/0.log" Jan 28 10:42:37 crc kubenswrapper[4735]: I0128 10:42:37.751810 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jbc6s_d401e4f7-afe6-4095-86cc-f43f43130c4b/operator/0.log" Jan 28 10:42:37 crc kubenswrapper[4735]: I0128 10:42:37.846366 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-2hkvv_24c4a2af-7bb3-4a12-8279-58f7dda0ef7e/manager/0.log" Jan 28 10:42:38 crc kubenswrapper[4735]: I0128 10:42:38.039469 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-bcnkf_351faed5-c166-4766-99af-3c5b728b1f78/manager/0.log" Jan 28 10:42:38 crc kubenswrapper[4735]: I0128 10:42:38.168846 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-pcp57_4699fc29-624c-482c-9e68-e71bdf957701/manager/0.log" Jan 28 10:42:38 crc kubenswrapper[4735]: I0128 10:42:38.279349 4735 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7f5f655bfc-9jbp7_77910ebb-c49c-432e-b224-4fbafcaf7853/manager/0.log" Jan 28 10:42:38 crc kubenswrapper[4735]: I0128 10:42:38.360864 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-cmtww_733ac7cb-682a-4a00-b6eb-404b3010295c/manager/0.log" Jan 28 10:42:50 crc kubenswrapper[4735]: I0128 10:42:50.502664 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:42:50 crc kubenswrapper[4735]: E0128 10:42:50.503576 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:42:57 crc kubenswrapper[4735]: I0128 10:42:57.525658 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fzvkk_432a29bc-0436-4d07-bb40-25bb08020746/control-plane-machine-set-operator/0.log" Jan 28 10:42:57 crc kubenswrapper[4735]: I0128 10:42:57.733237 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wpjcc_c5d19be7-214e-4428-835a-5c325772fe6e/kube-rbac-proxy/0.log" Jan 28 10:42:57 crc kubenswrapper[4735]: I0128 10:42:57.819151 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wpjcc_c5d19be7-214e-4428-835a-5c325772fe6e/machine-api-operator/0.log" Jan 28 10:43:02 crc kubenswrapper[4735]: I0128 10:43:02.503361 4735 scope.go:117] "RemoveContainer" 
containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:43:02 crc kubenswrapper[4735]: E0128 10:43:02.504452 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:43:10 crc kubenswrapper[4735]: I0128 10:43:10.640056 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-hw27n_cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177/cert-manager-controller/0.log" Jan 28 10:43:10 crc kubenswrapper[4735]: I0128 10:43:10.831380 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-gbqg5_6f21432c-a03e-4600-8c86-65159cf299ba/cert-manager-cainjector/0.log" Jan 28 10:43:10 crc kubenswrapper[4735]: I0128 10:43:10.912828 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-zxd5m_6d0afe1e-f921-43c9-8f44-3498f0f45b85/cert-manager-webhook/0.log" Jan 28 10:43:15 crc kubenswrapper[4735]: I0128 10:43:15.502969 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:43:15 crc kubenswrapper[4735]: E0128 10:43:15.503655 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:43:23 crc 
kubenswrapper[4735]: I0128 10:43:23.502849 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-grbj7_5a554582-3291-4541-919b-b58a23e818df/nmstate-console-plugin/0.log" Jan 28 10:43:23 crc kubenswrapper[4735]: I0128 10:43:23.723165 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7pft9_8d7d44ef-aa10-45a5-9ee3-a61c6178fb52/nmstate-handler/0.log" Jan 28 10:43:23 crc kubenswrapper[4735]: I0128 10:43:23.908451 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-v9d5m_855f8b5d-befc-4d5d-9653-c9820ca8eafd/kube-rbac-proxy/0.log" Jan 28 10:43:24 crc kubenswrapper[4735]: I0128 10:43:24.017268 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-v9d5m_855f8b5d-befc-4d5d-9653-c9820ca8eafd/nmstate-metrics/0.log" Jan 28 10:43:24 crc kubenswrapper[4735]: I0128 10:43:24.077063 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-7nsxc_d505ac14-ad3f-45fc-a07c-c10acaeb03d2/nmstate-operator/0.log" Jan 28 10:43:24 crc kubenswrapper[4735]: I0128 10:43:24.216969 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-h28tw_19bcd978-0715-474c-8abf-6028efe34478/nmstate-webhook/0.log" Jan 28 10:43:28 crc kubenswrapper[4735]: I0128 10:43:28.505182 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:43:28 crc kubenswrapper[4735]: E0128 10:43:28.506041 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:43:41 crc kubenswrapper[4735]: I0128 10:43:41.503642 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:43:41 crc kubenswrapper[4735]: E0128 10:43:41.504954 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:43:52 crc kubenswrapper[4735]: I0128 10:43:52.557139 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lsbrt_4cf87a70-49c9-4cb3-a2be-afd200ca892b/kube-rbac-proxy/0.log" Jan 28 10:43:52 crc kubenswrapper[4735]: I0128 10:43:52.697071 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lsbrt_4cf87a70-49c9-4cb3-a2be-afd200ca892b/controller/0.log" Jan 28 10:43:52 crc kubenswrapper[4735]: I0128 10:43:52.825895 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-frr-files/0.log" Jan 28 10:43:52 crc kubenswrapper[4735]: I0128 10:43:52.997232 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-frr-files/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.048741 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-reloader/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.054654 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-metrics/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.055039 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-reloader/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.247181 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-frr-files/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.297432 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-metrics/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.310343 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-reloader/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.321936 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-metrics/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.508114 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-reloader/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.520210 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/controller/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.531293 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-metrics/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.540102 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-frr-files/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.732303 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/kube-rbac-proxy-frr/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.738136 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/kube-rbac-proxy/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.751220 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/frr-metrics/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.938444 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/reloader/0.log" Jan 28 10:43:53 crc kubenswrapper[4735]: I0128 10:43:53.975180 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qk849_0042167e-002b-4fa7-8ff3-161e1326fae1/frr-k8s-webhook-server/0.log" Jan 28 10:43:54 crc kubenswrapper[4735]: I0128 10:43:54.421775 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-54979c8865-g4cx7_8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad/manager/0.log" Jan 28 10:43:54 crc kubenswrapper[4735]: I0128 10:43:54.507175 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cb99f596-8vf2t_3da5666b-ceb3-488f-ac22-1f25ba7cea7f/webhook-server/0.log" Jan 28 10:43:54 crc kubenswrapper[4735]: I0128 10:43:54.715363 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wkrfv_f582a362-710c-4f62-988c-e09f4f34da10/kube-rbac-proxy/0.log" Jan 28 10:43:55 crc kubenswrapper[4735]: I0128 10:43:55.001789 4735 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/frr/0.log" Jan 28 10:43:55 crc kubenswrapper[4735]: I0128 10:43:55.115039 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wkrfv_f582a362-710c-4f62-988c-e09f4f34da10/speaker/0.log" Jan 28 10:43:56 crc kubenswrapper[4735]: I0128 10:43:56.502882 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:43:56 crc kubenswrapper[4735]: E0128 10:43:56.503275 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:44:08 crc kubenswrapper[4735]: I0128 10:44:08.454973 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/util/0.log" Jan 28 10:44:08 crc kubenswrapper[4735]: I0128 10:44:08.695708 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/util/0.log" Jan 28 10:44:08 crc kubenswrapper[4735]: I0128 10:44:08.708769 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/pull/0.log" Jan 28 10:44:08 crc kubenswrapper[4735]: I0128 10:44:08.725051 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/pull/0.log" Jan 28 10:44:08 crc kubenswrapper[4735]: I0128 10:44:08.893649 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/util/0.log" Jan 28 10:44:08 crc kubenswrapper[4735]: I0128 10:44:08.939283 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/pull/0.log" Jan 28 10:44:08 crc kubenswrapper[4735]: I0128 10:44:08.942738 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/extract/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.088284 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/util/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.246637 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/pull/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.268677 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/util/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.292204 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/pull/0.log" Jan 28 
10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.481388 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/util/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.482859 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/extract/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.491651 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/pull/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.705190 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-utilities/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.815826 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-content/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.865513 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-content/0.log" Jan 28 10:44:09 crc kubenswrapper[4735]: I0128 10:44:09.876318 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-utilities/0.log" Jan 28 10:44:10 crc kubenswrapper[4735]: I0128 10:44:10.073245 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-content/0.log" Jan 28 10:44:10 crc 
kubenswrapper[4735]: I0128 10:44:10.085880 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-utilities/0.log" Jan 28 10:44:10 crc kubenswrapper[4735]: I0128 10:44:10.273408 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-utilities/0.log" Jan 28 10:44:10 crc kubenswrapper[4735]: I0128 10:44:10.509573 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/registry-server/0.log" Jan 28 10:44:10 crc kubenswrapper[4735]: I0128 10:44:10.568565 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-utilities/0.log" Jan 28 10:44:10 crc kubenswrapper[4735]: I0128 10:44:10.633606 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-content/0.log" Jan 28 10:44:10 crc kubenswrapper[4735]: I0128 10:44:10.658028 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-content/0.log" Jan 28 10:44:10 crc kubenswrapper[4735]: I0128 10:44:10.712708 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-utilities/0.log" Jan 28 10:44:10 crc kubenswrapper[4735]: I0128 10:44:10.774923 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-content/0.log" Jan 28 10:44:10 crc kubenswrapper[4735]: I0128 10:44:10.879176 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mrxf2_761659b6-1552-4fd3-8e5a-10bf29a772b3/marketplace-operator/0.log" Jan 28 10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.123380 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-utilities/0.log" Jan 28 10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.275457 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/registry-server/0.log" Jan 28 10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.315712 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-content/0.log" Jan 28 10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.321629 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-utilities/0.log" Jan 28 10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.383387 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-content/0.log" Jan 28 10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.502614 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:44:11 crc kubenswrapper[4735]: E0128 10:44:11.503091 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 
10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.618188 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-content/0.log" Jan 28 10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.677244 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/registry-server/0.log" Jan 28 10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.784174 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-utilities/0.log" Jan 28 10:44:11 crc kubenswrapper[4735]: I0128 10:44:11.910700 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfgcd_47112b11-5b61-4d2c-8c6b-059b673f74c2/extract-utilities/0.log" Jan 28 10:44:12 crc kubenswrapper[4735]: I0128 10:44:12.131670 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfgcd_47112b11-5b61-4d2c-8c6b-059b673f74c2/extract-content/0.log" Jan 28 10:44:12 crc kubenswrapper[4735]: I0128 10:44:12.146941 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfgcd_47112b11-5b61-4d2c-8c6b-059b673f74c2/extract-content/0.log" Jan 28 10:44:12 crc kubenswrapper[4735]: I0128 10:44:12.190262 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfgcd_47112b11-5b61-4d2c-8c6b-059b673f74c2/extract-utilities/0.log" Jan 28 10:44:12 crc kubenswrapper[4735]: I0128 10:44:12.393972 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfgcd_47112b11-5b61-4d2c-8c6b-059b673f74c2/extract-utilities/0.log" Jan 28 10:44:12 crc kubenswrapper[4735]: I0128 10:44:12.413650 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-sfgcd_47112b11-5b61-4d2c-8c6b-059b673f74c2/extract-content/0.log" Jan 28 10:44:12 crc kubenswrapper[4735]: I0128 10:44:12.932372 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sfgcd_47112b11-5b61-4d2c-8c6b-059b673f74c2/registry-server/0.log" Jan 28 10:44:26 crc kubenswrapper[4735]: I0128 10:44:26.503559 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:44:26 crc kubenswrapper[4735]: E0128 10:44:26.504414 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:44:37 crc kubenswrapper[4735]: I0128 10:44:37.509931 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:44:37 crc kubenswrapper[4735]: E0128 10:44:37.510877 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:44:49 crc kubenswrapper[4735]: I0128 10:44:49.502981 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:44:49 crc kubenswrapper[4735]: E0128 10:44:49.504806 4735 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.183689 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw"] Jan 28 10:45:00 crc kubenswrapper[4735]: E0128 10:45:00.184997 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca266cf0-0b71-49e6-9a3d-5e16072909de" containerName="container-00" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.185020 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca266cf0-0b71-49e6-9a3d-5e16072909de" containerName="container-00" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.185356 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca266cf0-0b71-49e6-9a3d-5e16072909de" containerName="container-00" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.186380 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.190915 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.190945 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.196862 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw"] Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.315602 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/814a4c44-26c8-4260-8e94-862dac905ed5-secret-volume\") pod \"collect-profiles-29493285-wmgnw\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.315697 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fvmf\" (UniqueName: \"kubernetes.io/projected/814a4c44-26c8-4260-8e94-862dac905ed5-kube-api-access-4fvmf\") pod \"collect-profiles-29493285-wmgnw\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.315794 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814a4c44-26c8-4260-8e94-862dac905ed5-config-volume\") pod \"collect-profiles-29493285-wmgnw\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.418451 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/814a4c44-26c8-4260-8e94-862dac905ed5-secret-volume\") pod \"collect-profiles-29493285-wmgnw\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.418626 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fvmf\" (UniqueName: \"kubernetes.io/projected/814a4c44-26c8-4260-8e94-862dac905ed5-kube-api-access-4fvmf\") pod \"collect-profiles-29493285-wmgnw\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.418823 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814a4c44-26c8-4260-8e94-862dac905ed5-config-volume\") pod \"collect-profiles-29493285-wmgnw\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.419735 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814a4c44-26c8-4260-8e94-862dac905ed5-config-volume\") pod \"collect-profiles-29493285-wmgnw\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.435639 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/814a4c44-26c8-4260-8e94-862dac905ed5-secret-volume\") pod \"collect-profiles-29493285-wmgnw\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.441643 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fvmf\" (UniqueName: \"kubernetes.io/projected/814a4c44-26c8-4260-8e94-862dac905ed5-kube-api-access-4fvmf\") pod \"collect-profiles-29493285-wmgnw\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.522952 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:00 crc kubenswrapper[4735]: I0128 10:45:00.996131 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw"] Jan 28 10:45:01 crc kubenswrapper[4735]: I0128 10:45:01.151924 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" event={"ID":"814a4c44-26c8-4260-8e94-862dac905ed5","Type":"ContainerStarted","Data":"582f83a4fcdf054d89b85c3c99fce5717d065c64b1cd39de57c775c68392c538"} Jan 28 10:45:02 crc kubenswrapper[4735]: I0128 10:45:02.173149 4735 generic.go:334] "Generic (PLEG): container finished" podID="814a4c44-26c8-4260-8e94-862dac905ed5" containerID="2dcad061142c1654e097c6f9a2014f3880b4290dd4070d07c89aaf2c08d80ea8" exitCode=0 Jan 28 10:45:02 crc kubenswrapper[4735]: I0128 10:45:02.173469 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" 
event={"ID":"814a4c44-26c8-4260-8e94-862dac905ed5","Type":"ContainerDied","Data":"2dcad061142c1654e097c6f9a2014f3880b4290dd4070d07c89aaf2c08d80ea8"} Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.536359 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.685941 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814a4c44-26c8-4260-8e94-862dac905ed5-config-volume\") pod \"814a4c44-26c8-4260-8e94-862dac905ed5\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.685986 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/814a4c44-26c8-4260-8e94-862dac905ed5-secret-volume\") pod \"814a4c44-26c8-4260-8e94-862dac905ed5\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.686054 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fvmf\" (UniqueName: \"kubernetes.io/projected/814a4c44-26c8-4260-8e94-862dac905ed5-kube-api-access-4fvmf\") pod \"814a4c44-26c8-4260-8e94-862dac905ed5\" (UID: \"814a4c44-26c8-4260-8e94-862dac905ed5\") " Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.686632 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814a4c44-26c8-4260-8e94-862dac905ed5-config-volume" (OuterVolumeSpecName: "config-volume") pod "814a4c44-26c8-4260-8e94-862dac905ed5" (UID: "814a4c44-26c8-4260-8e94-862dac905ed5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.691649 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814a4c44-26c8-4260-8e94-862dac905ed5-kube-api-access-4fvmf" (OuterVolumeSpecName: "kube-api-access-4fvmf") pod "814a4c44-26c8-4260-8e94-862dac905ed5" (UID: "814a4c44-26c8-4260-8e94-862dac905ed5"). InnerVolumeSpecName "kube-api-access-4fvmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.692180 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814a4c44-26c8-4260-8e94-862dac905ed5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "814a4c44-26c8-4260-8e94-862dac905ed5" (UID: "814a4c44-26c8-4260-8e94-862dac905ed5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.788156 4735 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/814a4c44-26c8-4260-8e94-862dac905ed5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.788391 4735 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/814a4c44-26c8-4260-8e94-862dac905ed5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 10:45:03 crc kubenswrapper[4735]: I0128 10:45:03.788403 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fvmf\" (UniqueName: \"kubernetes.io/projected/814a4c44-26c8-4260-8e94-862dac905ed5-kube-api-access-4fvmf\") on node \"crc\" DevicePath \"\"" Jan 28 10:45:04 crc kubenswrapper[4735]: I0128 10:45:04.194767 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" 
event={"ID":"814a4c44-26c8-4260-8e94-862dac905ed5","Type":"ContainerDied","Data":"582f83a4fcdf054d89b85c3c99fce5717d065c64b1cd39de57c775c68392c538"} Jan 28 10:45:04 crc kubenswrapper[4735]: I0128 10:45:04.194812 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="582f83a4fcdf054d89b85c3c99fce5717d065c64b1cd39de57c775c68392c538" Jan 28 10:45:04 crc kubenswrapper[4735]: I0128 10:45:04.194869 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493285-wmgnw" Jan 28 10:45:04 crc kubenswrapper[4735]: I0128 10:45:04.503539 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:45:04 crc kubenswrapper[4735]: E0128 10:45:04.504112 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:45:04 crc kubenswrapper[4735]: I0128 10:45:04.633782 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh"] Jan 28 10:45:04 crc kubenswrapper[4735]: I0128 10:45:04.650491 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493240-8spzh"] Jan 28 10:45:05 crc kubenswrapper[4735]: I0128 10:45:05.523267 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b969dfd-602d-47fb-a56e-f059eb358442" path="/var/lib/kubelet/pods/9b969dfd-602d-47fb-a56e-f059eb358442/volumes" Jan 28 10:45:17 crc kubenswrapper[4735]: I0128 10:45:17.531263 4735 scope.go:117] "RemoveContainer" 
containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:45:17 crc kubenswrapper[4735]: E0128 10:45:17.532287 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:45:32 crc kubenswrapper[4735]: I0128 10:45:32.502808 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:45:32 crc kubenswrapper[4735]: E0128 10:45:32.504872 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:45:47 crc kubenswrapper[4735]: I0128 10:45:47.518322 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:45:47 crc kubenswrapper[4735]: E0128 10:45:47.519804 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:45:51 crc kubenswrapper[4735]: I0128 10:45:51.708865 4735 generic.go:334] 
"Generic (PLEG): container finished" podID="d10fba85-6c88-4282-85db-b2f5a283943f" containerID="1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f" exitCode=0 Jan 28 10:45:51 crc kubenswrapper[4735]: I0128 10:45:51.708966 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" event={"ID":"d10fba85-6c88-4282-85db-b2f5a283943f","Type":"ContainerDied","Data":"1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f"} Jan 28 10:45:51 crc kubenswrapper[4735]: I0128 10:45:51.710169 4735 scope.go:117] "RemoveContainer" containerID="1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f" Jan 28 10:45:51 crc kubenswrapper[4735]: I0128 10:45:51.934563 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jg5dj_must-gather-g8wtq_d10fba85-6c88-4282-85db-b2f5a283943f/gather/0.log" Jan 28 10:45:55 crc kubenswrapper[4735]: I0128 10:45:55.812397 4735 scope.go:117] "RemoveContainer" containerID="d52b8c76e71754a0eb32594a68f868bbe1bfee5273e25a2e13649945f3056fbb" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.211867 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jg5dj/must-gather-g8wtq"] Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.214595 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" podUID="d10fba85-6c88-4282-85db-b2f5a283943f" containerName="copy" containerID="cri-o://10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997" gracePeriod=2 Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.221020 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jg5dj/must-gather-g8wtq"] Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.683027 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jg5dj_must-gather-g8wtq_d10fba85-6c88-4282-85db-b2f5a283943f/copy/0.log" Jan 
28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.683919 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.794317 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxd88\" (UniqueName: \"kubernetes.io/projected/d10fba85-6c88-4282-85db-b2f5a283943f-kube-api-access-bxd88\") pod \"d10fba85-6c88-4282-85db-b2f5a283943f\" (UID: \"d10fba85-6c88-4282-85db-b2f5a283943f\") " Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.794593 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d10fba85-6c88-4282-85db-b2f5a283943f-must-gather-output\") pod \"d10fba85-6c88-4282-85db-b2f5a283943f\" (UID: \"d10fba85-6c88-4282-85db-b2f5a283943f\") " Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.794888 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jg5dj_must-gather-g8wtq_d10fba85-6c88-4282-85db-b2f5a283943f/copy/0.log" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.796089 4735 generic.go:334] "Generic (PLEG): container finished" podID="d10fba85-6c88-4282-85db-b2f5a283943f" containerID="10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997" exitCode=143 Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.796155 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jg5dj/must-gather-g8wtq" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.796161 4735 scope.go:117] "RemoveContainer" containerID="10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.802822 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10fba85-6c88-4282-85db-b2f5a283943f-kube-api-access-bxd88" (OuterVolumeSpecName: "kube-api-access-bxd88") pod "d10fba85-6c88-4282-85db-b2f5a283943f" (UID: "d10fba85-6c88-4282-85db-b2f5a283943f"). InnerVolumeSpecName "kube-api-access-bxd88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.864199 4735 scope.go:117] "RemoveContainer" containerID="1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.897609 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxd88\" (UniqueName: \"kubernetes.io/projected/d10fba85-6c88-4282-85db-b2f5a283943f-kube-api-access-bxd88\") on node \"crc\" DevicePath \"\"" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.944306 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d10fba85-6c88-4282-85db-b2f5a283943f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d10fba85-6c88-4282-85db-b2f5a283943f" (UID: "d10fba85-6c88-4282-85db-b2f5a283943f"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.948071 4735 scope.go:117] "RemoveContainer" containerID="10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997" Jan 28 10:45:59 crc kubenswrapper[4735]: E0128 10:45:59.948805 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997\": container with ID starting with 10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997 not found: ID does not exist" containerID="10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.948842 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997"} err="failed to get container status \"10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997\": rpc error: code = NotFound desc = could not find container \"10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997\": container with ID starting with 10dd58eeb153a58578c47c52717ce45c535c3cf722868dcef2f2af8d5ec8d997 not found: ID does not exist" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.948864 4735 scope.go:117] "RemoveContainer" containerID="1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f" Jan 28 10:45:59 crc kubenswrapper[4735]: E0128 10:45:59.949128 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f\": container with ID starting with 1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f not found: ID does not exist" containerID="1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f" Jan 28 10:45:59 crc kubenswrapper[4735]: I0128 10:45:59.949147 
4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f"} err="failed to get container status \"1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f\": rpc error: code = NotFound desc = could not find container \"1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f\": container with ID starting with 1b847d938fb9489bad1e20869fbfc9a264ea55050977a3eba671df6fc676e89f not found: ID does not exist" Jan 28 10:46:00 crc kubenswrapper[4735]: I0128 10:46:00.001181 4735 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d10fba85-6c88-4282-85db-b2f5a283943f-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 10:46:01 crc kubenswrapper[4735]: I0128 10:46:01.513411 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d10fba85-6c88-4282-85db-b2f5a283943f" path="/var/lib/kubelet/pods/d10fba85-6c88-4282-85db-b2f5a283943f/volumes" Jan 28 10:46:02 crc kubenswrapper[4735]: I0128 10:46:02.503074 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:46:02 crc kubenswrapper[4735]: E0128 10:46:02.503900 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:46:13 crc kubenswrapper[4735]: I0128 10:46:13.502929 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:46:13 crc kubenswrapper[4735]: E0128 10:46:13.504002 4735 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:46:24 crc kubenswrapper[4735]: I0128 10:46:24.503289 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:46:24 crc kubenswrapper[4735]: E0128 10:46:24.504139 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:46:38 crc kubenswrapper[4735]: I0128 10:46:38.503018 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:46:38 crc kubenswrapper[4735]: E0128 10:46:38.505188 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:46:49 crc kubenswrapper[4735]: I0128 10:46:49.503520 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:46:49 crc kubenswrapper[4735]: E0128 10:46:49.504616 4735 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.646632 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q8ktn"] Jan 28 10:46:57 crc kubenswrapper[4735]: E0128 10:46:57.647657 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10fba85-6c88-4282-85db-b2f5a283943f" containerName="copy" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.647670 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10fba85-6c88-4282-85db-b2f5a283943f" containerName="copy" Jan 28 10:46:57 crc kubenswrapper[4735]: E0128 10:46:57.647683 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10fba85-6c88-4282-85db-b2f5a283943f" containerName="gather" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.647688 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10fba85-6c88-4282-85db-b2f5a283943f" containerName="gather" Jan 28 10:46:57 crc kubenswrapper[4735]: E0128 10:46:57.647701 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814a4c44-26c8-4260-8e94-862dac905ed5" containerName="collect-profiles" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.647709 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="814a4c44-26c8-4260-8e94-862dac905ed5" containerName="collect-profiles" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.647920 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="814a4c44-26c8-4260-8e94-862dac905ed5" containerName="collect-profiles" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 
10:46:57.647934 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10fba85-6c88-4282-85db-b2f5a283943f" containerName="gather" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.647957 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10fba85-6c88-4282-85db-b2f5a283943f" containerName="copy" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.649505 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.659696 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q8ktn"] Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.801731 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x96k\" (UniqueName: \"kubernetes.io/projected/99aff266-c536-4f20-be0a-bc245b6e9c87-kube-api-access-4x96k\") pod \"certified-operators-q8ktn\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.801949 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-utilities\") pod \"certified-operators-q8ktn\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.801988 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-catalog-content\") pod \"certified-operators-q8ktn\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 
10:46:57.903824 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x96k\" (UniqueName: \"kubernetes.io/projected/99aff266-c536-4f20-be0a-bc245b6e9c87-kube-api-access-4x96k\") pod \"certified-operators-q8ktn\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.904390 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-utilities\") pod \"certified-operators-q8ktn\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.904424 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-catalog-content\") pod \"certified-operators-q8ktn\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.904904 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-utilities\") pod \"certified-operators-q8ktn\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.904937 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-catalog-content\") pod \"certified-operators-q8ktn\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.926328 4735 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x96k\" (UniqueName: \"kubernetes.io/projected/99aff266-c536-4f20-be0a-bc245b6e9c87-kube-api-access-4x96k\") pod \"certified-operators-q8ktn\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:57 crc kubenswrapper[4735]: I0128 10:46:57.986922 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:46:58 crc kubenswrapper[4735]: I0128 10:46:58.462870 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q8ktn"] Jan 28 10:46:58 crc kubenswrapper[4735]: I0128 10:46:58.511381 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8ktn" event={"ID":"99aff266-c536-4f20-be0a-bc245b6e9c87","Type":"ContainerStarted","Data":"eb94c0a2f9e8f2af5d1e84c9051104b35252aed7d8ecee35bb43de20b3cfda4d"} Jan 28 10:46:59 crc kubenswrapper[4735]: I0128 10:46:59.536194 4735 generic.go:334] "Generic (PLEG): container finished" podID="99aff266-c536-4f20-be0a-bc245b6e9c87" containerID="3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26" exitCode=0 Jan 28 10:46:59 crc kubenswrapper[4735]: I0128 10:46:59.536283 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8ktn" event={"ID":"99aff266-c536-4f20-be0a-bc245b6e9c87","Type":"ContainerDied","Data":"3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26"} Jan 28 10:46:59 crc kubenswrapper[4735]: I0128 10:46:59.543482 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 10:47:01 crc kubenswrapper[4735]: I0128 10:47:01.556552 4735 generic.go:334] "Generic (PLEG): container finished" podID="99aff266-c536-4f20-be0a-bc245b6e9c87" 
containerID="13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957" exitCode=0 Jan 28 10:47:01 crc kubenswrapper[4735]: I0128 10:47:01.556627 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8ktn" event={"ID":"99aff266-c536-4f20-be0a-bc245b6e9c87","Type":"ContainerDied","Data":"13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957"} Jan 28 10:47:02 crc kubenswrapper[4735]: I0128 10:47:02.591198 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8ktn" event={"ID":"99aff266-c536-4f20-be0a-bc245b6e9c87","Type":"ContainerStarted","Data":"42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7"} Jan 28 10:47:02 crc kubenswrapper[4735]: I0128 10:47:02.626360 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q8ktn" podStartSLOduration=3.187098429 podStartE2EDuration="5.62633524s" podCreationTimestamp="2026-01-28 10:46:57 +0000 UTC" firstStartedPulling="2026-01-28 10:46:59.543091098 +0000 UTC m=+3872.692755481" lastFinishedPulling="2026-01-28 10:47:01.982327899 +0000 UTC m=+3875.131992292" observedRunningTime="2026-01-28 10:47:02.61864032 +0000 UTC m=+3875.768304693" watchObservedRunningTime="2026-01-28 10:47:02.62633524 +0000 UTC m=+3875.775999613" Jan 28 10:47:04 crc kubenswrapper[4735]: I0128 10:47:04.502515 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:47:04 crc kubenswrapper[4735]: E0128 10:47:04.503290 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" 
podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:47:07 crc kubenswrapper[4735]: I0128 10:47:07.987435 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:47:07 crc kubenswrapper[4735]: I0128 10:47:07.989480 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:47:08 crc kubenswrapper[4735]: I0128 10:47:08.059424 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:47:08 crc kubenswrapper[4735]: I0128 10:47:08.702798 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:47:08 crc kubenswrapper[4735]: I0128 10:47:08.766626 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q8ktn"] Jan 28 10:47:10 crc kubenswrapper[4735]: I0128 10:47:10.672710 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q8ktn" podUID="99aff266-c536-4f20-be0a-bc245b6e9c87" containerName="registry-server" containerID="cri-o://42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7" gracePeriod=2 Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.141037 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.291339 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-utilities\") pod \"99aff266-c536-4f20-be0a-bc245b6e9c87\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.291422 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x96k\" (UniqueName: \"kubernetes.io/projected/99aff266-c536-4f20-be0a-bc245b6e9c87-kube-api-access-4x96k\") pod \"99aff266-c536-4f20-be0a-bc245b6e9c87\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.291627 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-catalog-content\") pod \"99aff266-c536-4f20-be0a-bc245b6e9c87\" (UID: \"99aff266-c536-4f20-be0a-bc245b6e9c87\") " Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.294852 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-utilities" (OuterVolumeSpecName: "utilities") pod "99aff266-c536-4f20-be0a-bc245b6e9c87" (UID: "99aff266-c536-4f20-be0a-bc245b6e9c87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.299489 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99aff266-c536-4f20-be0a-bc245b6e9c87-kube-api-access-4x96k" (OuterVolumeSpecName: "kube-api-access-4x96k") pod "99aff266-c536-4f20-be0a-bc245b6e9c87" (UID: "99aff266-c536-4f20-be0a-bc245b6e9c87"). InnerVolumeSpecName "kube-api-access-4x96k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.366850 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99aff266-c536-4f20-be0a-bc245b6e9c87" (UID: "99aff266-c536-4f20-be0a-bc245b6e9c87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.393863 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.393921 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x96k\" (UniqueName: \"kubernetes.io/projected/99aff266-c536-4f20-be0a-bc245b6e9c87-kube-api-access-4x96k\") on node \"crc\" DevicePath \"\"" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.393940 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aff266-c536-4f20-be0a-bc245b6e9c87-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.689938 4735 generic.go:334] "Generic (PLEG): container finished" podID="99aff266-c536-4f20-be0a-bc245b6e9c87" containerID="42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7" exitCode=0 Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.690053 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q8ktn" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.690048 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8ktn" event={"ID":"99aff266-c536-4f20-be0a-bc245b6e9c87","Type":"ContainerDied","Data":"42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7"} Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.690540 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8ktn" event={"ID":"99aff266-c536-4f20-be0a-bc245b6e9c87","Type":"ContainerDied","Data":"eb94c0a2f9e8f2af5d1e84c9051104b35252aed7d8ecee35bb43de20b3cfda4d"} Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.690567 4735 scope.go:117] "RemoveContainer" containerID="42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.721631 4735 scope.go:117] "RemoveContainer" containerID="13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.750887 4735 scope.go:117] "RemoveContainer" containerID="3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.752013 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q8ktn"] Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.765489 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q8ktn"] Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.807626 4735 scope.go:117] "RemoveContainer" containerID="42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7" Jan 28 10:47:11 crc kubenswrapper[4735]: E0128 10:47:11.808281 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7\": container with ID starting with 42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7 not found: ID does not exist" containerID="42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.808416 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7"} err="failed to get container status \"42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7\": rpc error: code = NotFound desc = could not find container \"42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7\": container with ID starting with 42c6964f7d1c71f8ea6b2e4dc31b3a6b68497871390672ec7f08651df6f51ba7 not found: ID does not exist" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.808539 4735 scope.go:117] "RemoveContainer" containerID="13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957" Jan 28 10:47:11 crc kubenswrapper[4735]: E0128 10:47:11.809173 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957\": container with ID starting with 13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957 not found: ID does not exist" containerID="13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.809213 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957"} err="failed to get container status \"13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957\": rpc error: code = NotFound desc = could not find container \"13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957\": container with ID 
starting with 13ff0450c8e58918277bff399e303a8cf2c9e6ee881d37a592d23d3b218f2957 not found: ID does not exist" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.809240 4735 scope.go:117] "RemoveContainer" containerID="3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26" Jan 28 10:47:11 crc kubenswrapper[4735]: E0128 10:47:11.809822 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26\": container with ID starting with 3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26 not found: ID does not exist" containerID="3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26" Jan 28 10:47:11 crc kubenswrapper[4735]: I0128 10:47:11.809870 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26"} err="failed to get container status \"3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26\": rpc error: code = NotFound desc = could not find container \"3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26\": container with ID starting with 3be7b75eb8bdcef1aa3ef002e0c310de10c5150f088a3bfc8dd47dd5c514eb26 not found: ID does not exist" Jan 28 10:47:13 crc kubenswrapper[4735]: I0128 10:47:13.522942 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99aff266-c536-4f20-be0a-bc245b6e9c87" path="/var/lib/kubelet/pods/99aff266-c536-4f20-be0a-bc245b6e9c87/volumes" Jan 28 10:47:16 crc kubenswrapper[4735]: I0128 10:47:16.503047 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:47:16 crc kubenswrapper[4735]: E0128 10:47:16.503662 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:47:31 crc kubenswrapper[4735]: I0128 10:47:31.503666 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:47:31 crc kubenswrapper[4735]: E0128 10:47:31.504907 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:47:43 crc kubenswrapper[4735]: I0128 10:47:43.504590 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:47:44 crc kubenswrapper[4735]: I0128 10:47:44.448234 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"00942994fe73689ac14efdb488c14b528c781f0c4569d2a1e97e5dd87ba17b37"} Jan 28 10:47:55 crc kubenswrapper[4735]: I0128 10:47:55.906857 4735 scope.go:117] "RemoveContainer" containerID="14c3ec9cbab122f01263879e59dd188f7ea28c08375dc079449b098176c1051e" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.363664 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5mmtg/must-gather-2jqfn"] Jan 28 10:48:53 crc kubenswrapper[4735]: E0128 10:48:53.364693 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aff266-c536-4f20-be0a-bc245b6e9c87" 
containerName="extract-content" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.364705 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aff266-c536-4f20-be0a-bc245b6e9c87" containerName="extract-content" Jan 28 10:48:53 crc kubenswrapper[4735]: E0128 10:48:53.364716 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aff266-c536-4f20-be0a-bc245b6e9c87" containerName="registry-server" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.364722 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aff266-c536-4f20-be0a-bc245b6e9c87" containerName="registry-server" Jan 28 10:48:53 crc kubenswrapper[4735]: E0128 10:48:53.364738 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aff266-c536-4f20-be0a-bc245b6e9c87" containerName="extract-utilities" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.364745 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aff266-c536-4f20-be0a-bc245b6e9c87" containerName="extract-utilities" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.364949 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="99aff266-c536-4f20-be0a-bc245b6e9c87" containerName="registry-server" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.365879 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.368158 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5mmtg"/"openshift-service-ca.crt" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.368377 4735 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5mmtg"/"kube-root-ca.crt" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.368805 4735 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5mmtg"/"default-dockercfg-twp47" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.372569 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5mmtg/must-gather-2jqfn"] Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.462069 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558wj\" (UniqueName: \"kubernetes.io/projected/115a5701-253b-423c-8989-baddf5df509d-kube-api-access-558wj\") pod \"must-gather-2jqfn\" (UID: \"115a5701-253b-423c-8989-baddf5df509d\") " pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.462142 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/115a5701-253b-423c-8989-baddf5df509d-must-gather-output\") pod \"must-gather-2jqfn\" (UID: \"115a5701-253b-423c-8989-baddf5df509d\") " pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.564310 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-558wj\" (UniqueName: \"kubernetes.io/projected/115a5701-253b-423c-8989-baddf5df509d-kube-api-access-558wj\") pod \"must-gather-2jqfn\" (UID: \"115a5701-253b-423c-8989-baddf5df509d\") " 
pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.564380 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/115a5701-253b-423c-8989-baddf5df509d-must-gather-output\") pod \"must-gather-2jqfn\" (UID: \"115a5701-253b-423c-8989-baddf5df509d\") " pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.564914 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/115a5701-253b-423c-8989-baddf5df509d-must-gather-output\") pod \"must-gather-2jqfn\" (UID: \"115a5701-253b-423c-8989-baddf5df509d\") " pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.584968 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-558wj\" (UniqueName: \"kubernetes.io/projected/115a5701-253b-423c-8989-baddf5df509d-kube-api-access-558wj\") pod \"must-gather-2jqfn\" (UID: \"115a5701-253b-423c-8989-baddf5df509d\") " pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:48:53 crc kubenswrapper[4735]: I0128 10:48:53.682316 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:48:54 crc kubenswrapper[4735]: I0128 10:48:54.158130 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5mmtg/must-gather-2jqfn"] Jan 28 10:48:55 crc kubenswrapper[4735]: I0128 10:48:55.155594 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" event={"ID":"115a5701-253b-423c-8989-baddf5df509d","Type":"ContainerStarted","Data":"ca4172f0a09f75f365d3693d5f2c0ffed44a8e067ec8a4f2fce3e844ca11e7b3"} Jan 28 10:48:55 crc kubenswrapper[4735]: I0128 10:48:55.155974 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" event={"ID":"115a5701-253b-423c-8989-baddf5df509d","Type":"ContainerStarted","Data":"d085f490fc70e4f3be85dcce31ae9bb2ef43ba9b7b2ed6b6d6b8341696c7fb67"} Jan 28 10:48:55 crc kubenswrapper[4735]: I0128 10:48:55.155986 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" event={"ID":"115a5701-253b-423c-8989-baddf5df509d","Type":"ContainerStarted","Data":"aa4f63a250f5444418f3fd55faf8a2dc545982abbf61e9b7653a8f85ded4f748"} Jan 28 10:48:55 crc kubenswrapper[4735]: I0128 10:48:55.171398 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" podStartSLOduration=2.171378435 podStartE2EDuration="2.171378435s" podCreationTimestamp="2026-01-28 10:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:48:55.168457733 +0000 UTC m=+3988.318122116" watchObservedRunningTime="2026-01-28 10:48:55.171378435 +0000 UTC m=+3988.321042808" Jan 28 10:48:58 crc kubenswrapper[4735]: I0128 10:48:58.930704 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5mmtg/crc-debug-tqlhr"] Jan 28 10:48:58 crc kubenswrapper[4735]: 
I0128 10:48:58.932619 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:48:59 crc kubenswrapper[4735]: I0128 10:48:59.085475 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkfd4\" (UniqueName: \"kubernetes.io/projected/f1a2758b-310b-4618-a13f-e3ee333d316d-kube-api-access-dkfd4\") pod \"crc-debug-tqlhr\" (UID: \"f1a2758b-310b-4618-a13f-e3ee333d316d\") " pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:48:59 crc kubenswrapper[4735]: I0128 10:48:59.085768 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1a2758b-310b-4618-a13f-e3ee333d316d-host\") pod \"crc-debug-tqlhr\" (UID: \"f1a2758b-310b-4618-a13f-e3ee333d316d\") " pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:48:59 crc kubenswrapper[4735]: I0128 10:48:59.188978 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1a2758b-310b-4618-a13f-e3ee333d316d-host\") pod \"crc-debug-tqlhr\" (UID: \"f1a2758b-310b-4618-a13f-e3ee333d316d\") " pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:48:59 crc kubenswrapper[4735]: I0128 10:48:59.189063 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1a2758b-310b-4618-a13f-e3ee333d316d-host\") pod \"crc-debug-tqlhr\" (UID: \"f1a2758b-310b-4618-a13f-e3ee333d316d\") " pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:48:59 crc kubenswrapper[4735]: I0128 10:48:59.189216 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkfd4\" (UniqueName: \"kubernetes.io/projected/f1a2758b-310b-4618-a13f-e3ee333d316d-kube-api-access-dkfd4\") pod \"crc-debug-tqlhr\" (UID: \"f1a2758b-310b-4618-a13f-e3ee333d316d\") 
" pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:48:59 crc kubenswrapper[4735]: I0128 10:48:59.219502 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkfd4\" (UniqueName: \"kubernetes.io/projected/f1a2758b-310b-4618-a13f-e3ee333d316d-kube-api-access-dkfd4\") pod \"crc-debug-tqlhr\" (UID: \"f1a2758b-310b-4618-a13f-e3ee333d316d\") " pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:48:59 crc kubenswrapper[4735]: I0128 10:48:59.258763 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:49:00 crc kubenswrapper[4735]: I0128 10:49:00.244263 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" event={"ID":"f1a2758b-310b-4618-a13f-e3ee333d316d","Type":"ContainerStarted","Data":"331107ac0f1545b8649483e99bd4994ac003017b721505abae91935a23ee1505"} Jan 28 10:49:00 crc kubenswrapper[4735]: I0128 10:49:00.245205 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" event={"ID":"f1a2758b-310b-4618-a13f-e3ee333d316d","Type":"ContainerStarted","Data":"6db42595ec02459a89766b0a2a57faa8376d807a3a571de8851e7dcce3a2684c"} Jan 28 10:49:00 crc kubenswrapper[4735]: I0128 10:49:00.268088 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" podStartSLOduration=2.2680554969999998 podStartE2EDuration="2.268055497s" podCreationTimestamp="2026-01-28 10:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:49:00.258434821 +0000 UTC m=+3993.408099204" watchObservedRunningTime="2026-01-28 10:49:00.268055497 +0000 UTC m=+3993.417719900" Jan 28 10:49:33 crc kubenswrapper[4735]: I0128 10:49:33.521173 4735 generic.go:334] "Generic (PLEG): container finished" 
podID="f1a2758b-310b-4618-a13f-e3ee333d316d" containerID="331107ac0f1545b8649483e99bd4994ac003017b721505abae91935a23ee1505" exitCode=0 Jan 28 10:49:33 crc kubenswrapper[4735]: I0128 10:49:33.524736 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" event={"ID":"f1a2758b-310b-4618-a13f-e3ee333d316d","Type":"ContainerDied","Data":"331107ac0f1545b8649483e99bd4994ac003017b721505abae91935a23ee1505"} Jan 28 10:49:34 crc kubenswrapper[4735]: I0128 10:49:34.646146 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:49:34 crc kubenswrapper[4735]: I0128 10:49:34.682095 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5mmtg/crc-debug-tqlhr"] Jan 28 10:49:34 crc kubenswrapper[4735]: I0128 10:49:34.694941 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5mmtg/crc-debug-tqlhr"] Jan 28 10:49:34 crc kubenswrapper[4735]: I0128 10:49:34.800022 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1a2758b-310b-4618-a13f-e3ee333d316d-host\") pod \"f1a2758b-310b-4618-a13f-e3ee333d316d\" (UID: \"f1a2758b-310b-4618-a13f-e3ee333d316d\") " Jan 28 10:49:34 crc kubenswrapper[4735]: I0128 10:49:34.800517 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1a2758b-310b-4618-a13f-e3ee333d316d-host" (OuterVolumeSpecName: "host") pod "f1a2758b-310b-4618-a13f-e3ee333d316d" (UID: "f1a2758b-310b-4618-a13f-e3ee333d316d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:49:34 crc kubenswrapper[4735]: I0128 10:49:34.800800 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkfd4\" (UniqueName: \"kubernetes.io/projected/f1a2758b-310b-4618-a13f-e3ee333d316d-kube-api-access-dkfd4\") pod \"f1a2758b-310b-4618-a13f-e3ee333d316d\" (UID: \"f1a2758b-310b-4618-a13f-e3ee333d316d\") " Jan 28 10:49:34 crc kubenswrapper[4735]: I0128 10:49:34.802418 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1a2758b-310b-4618-a13f-e3ee333d316d-host\") on node \"crc\" DevicePath \"\"" Jan 28 10:49:34 crc kubenswrapper[4735]: I0128 10:49:34.806500 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a2758b-310b-4618-a13f-e3ee333d316d-kube-api-access-dkfd4" (OuterVolumeSpecName: "kube-api-access-dkfd4") pod "f1a2758b-310b-4618-a13f-e3ee333d316d" (UID: "f1a2758b-310b-4618-a13f-e3ee333d316d"). InnerVolumeSpecName "kube-api-access-dkfd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:49:34 crc kubenswrapper[4735]: I0128 10:49:34.904291 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkfd4\" (UniqueName: \"kubernetes.io/projected/f1a2758b-310b-4618-a13f-e3ee333d316d-kube-api-access-dkfd4\") on node \"crc\" DevicePath \"\"" Jan 28 10:49:35 crc kubenswrapper[4735]: I0128 10:49:35.515835 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a2758b-310b-4618-a13f-e3ee333d316d" path="/var/lib/kubelet/pods/f1a2758b-310b-4618-a13f-e3ee333d316d/volumes" Jan 28 10:49:35 crc kubenswrapper[4735]: I0128 10:49:35.539319 4735 scope.go:117] "RemoveContainer" containerID="331107ac0f1545b8649483e99bd4994ac003017b721505abae91935a23ee1505" Jan 28 10:49:35 crc kubenswrapper[4735]: I0128 10:49:35.539470 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-tqlhr" Jan 28 10:49:35 crc kubenswrapper[4735]: I0128 10:49:35.861277 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5mmtg/crc-debug-f98r4"] Jan 28 10:49:35 crc kubenswrapper[4735]: E0128 10:49:35.862569 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a2758b-310b-4618-a13f-e3ee333d316d" containerName="container-00" Jan 28 10:49:35 crc kubenswrapper[4735]: I0128 10:49:35.862682 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a2758b-310b-4618-a13f-e3ee333d316d" containerName="container-00" Jan 28 10:49:35 crc kubenswrapper[4735]: I0128 10:49:35.863026 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a2758b-310b-4618-a13f-e3ee333d316d" containerName="container-00" Jan 28 10:49:35 crc kubenswrapper[4735]: I0128 10:49:35.864186 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.024154 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aed92995-36dc-402e-bff9-4158d64a4b65-host\") pod \"crc-debug-f98r4\" (UID: \"aed92995-36dc-402e-bff9-4158d64a4b65\") " pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.024491 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmqm\" (UniqueName: \"kubernetes.io/projected/aed92995-36dc-402e-bff9-4158d64a4b65-kube-api-access-pdmqm\") pod \"crc-debug-f98r4\" (UID: \"aed92995-36dc-402e-bff9-4158d64a4b65\") " pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.125853 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/aed92995-36dc-402e-bff9-4158d64a4b65-host\") pod \"crc-debug-f98r4\" (UID: \"aed92995-36dc-402e-bff9-4158d64a4b65\") " pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.125913 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmqm\" (UniqueName: \"kubernetes.io/projected/aed92995-36dc-402e-bff9-4158d64a4b65-kube-api-access-pdmqm\") pod \"crc-debug-f98r4\" (UID: \"aed92995-36dc-402e-bff9-4158d64a4b65\") " pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.126320 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aed92995-36dc-402e-bff9-4158d64a4b65-host\") pod \"crc-debug-f98r4\" (UID: \"aed92995-36dc-402e-bff9-4158d64a4b65\") " pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.155984 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmqm\" (UniqueName: \"kubernetes.io/projected/aed92995-36dc-402e-bff9-4158d64a4b65-kube-api-access-pdmqm\") pod \"crc-debug-f98r4\" (UID: \"aed92995-36dc-402e-bff9-4158d64a4b65\") " pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.184456 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.559447 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/crc-debug-f98r4" event={"ID":"aed92995-36dc-402e-bff9-4158d64a4b65","Type":"ContainerStarted","Data":"24ef0da7c8fa292ce395ce714ca6ff205a5aeb06a71cf54bbfe910dd14103ab7"} Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.559955 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/crc-debug-f98r4" event={"ID":"aed92995-36dc-402e-bff9-4158d64a4b65","Type":"ContainerStarted","Data":"094091e52efa51463a1499520769fba586dbad596485a41eed6a1d11a59e78eb"} Jan 28 10:49:36 crc kubenswrapper[4735]: I0128 10:49:36.583709 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5mmtg/crc-debug-f98r4" podStartSLOduration=1.583688547 podStartE2EDuration="1.583688547s" podCreationTimestamp="2026-01-28 10:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 10:49:36.575461546 +0000 UTC m=+4029.725125919" watchObservedRunningTime="2026-01-28 10:49:36.583688547 +0000 UTC m=+4029.733352920" Jan 28 10:49:37 crc kubenswrapper[4735]: I0128 10:49:37.575953 4735 generic.go:334] "Generic (PLEG): container finished" podID="aed92995-36dc-402e-bff9-4158d64a4b65" containerID="24ef0da7c8fa292ce395ce714ca6ff205a5aeb06a71cf54bbfe910dd14103ab7" exitCode=0 Jan 28 10:49:37 crc kubenswrapper[4735]: I0128 10:49:37.576004 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/crc-debug-f98r4" event={"ID":"aed92995-36dc-402e-bff9-4158d64a4b65","Type":"ContainerDied","Data":"24ef0da7c8fa292ce395ce714ca6ff205a5aeb06a71cf54bbfe910dd14103ab7"} Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.478914 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.523362 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5mmtg/crc-debug-f98r4"] Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.523433 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5mmtg/crc-debug-f98r4"] Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.588995 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmqm\" (UniqueName: \"kubernetes.io/projected/aed92995-36dc-402e-bff9-4158d64a4b65-kube-api-access-pdmqm\") pod \"aed92995-36dc-402e-bff9-4158d64a4b65\" (UID: \"aed92995-36dc-402e-bff9-4158d64a4b65\") " Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.589675 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aed92995-36dc-402e-bff9-4158d64a4b65-host\") pod \"aed92995-36dc-402e-bff9-4158d64a4b65\" (UID: \"aed92995-36dc-402e-bff9-4158d64a4b65\") " Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.589731 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aed92995-36dc-402e-bff9-4158d64a4b65-host" (OuterVolumeSpecName: "host") pod "aed92995-36dc-402e-bff9-4158d64a4b65" (UID: "aed92995-36dc-402e-bff9-4158d64a4b65"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.590261 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aed92995-36dc-402e-bff9-4158d64a4b65-host\") on node \"crc\" DevicePath \"\"" Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.593218 4735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="094091e52efa51463a1499520769fba586dbad596485a41eed6a1d11a59e78eb" Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.593304 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-f98r4" Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.597919 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed92995-36dc-402e-bff9-4158d64a4b65-kube-api-access-pdmqm" (OuterVolumeSpecName: "kube-api-access-pdmqm") pod "aed92995-36dc-402e-bff9-4158d64a4b65" (UID: "aed92995-36dc-402e-bff9-4158d64a4b65"). InnerVolumeSpecName "kube-api-access-pdmqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:49:39 crc kubenswrapper[4735]: I0128 10:49:39.691929 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdmqm\" (UniqueName: \"kubernetes.io/projected/aed92995-36dc-402e-bff9-4158d64a4b65-kube-api-access-pdmqm\") on node \"crc\" DevicePath \"\"" Jan 28 10:49:40 crc kubenswrapper[4735]: I0128 10:49:40.737852 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5mmtg/crc-debug-hzscj"] Jan 28 10:49:40 crc kubenswrapper[4735]: E0128 10:49:40.738851 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed92995-36dc-402e-bff9-4158d64a4b65" containerName="container-00" Jan 28 10:49:40 crc kubenswrapper[4735]: I0128 10:49:40.738867 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed92995-36dc-402e-bff9-4158d64a4b65" containerName="container-00" Jan 28 10:49:40 crc kubenswrapper[4735]: I0128 10:49:40.739054 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed92995-36dc-402e-bff9-4158d64a4b65" containerName="container-00" Jan 28 10:49:40 crc kubenswrapper[4735]: I0128 10:49:40.739681 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:40 crc kubenswrapper[4735]: I0128 10:49:40.821085 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5cjz\" (UniqueName: \"kubernetes.io/projected/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-kube-api-access-t5cjz\") pod \"crc-debug-hzscj\" (UID: \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\") " pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:40 crc kubenswrapper[4735]: I0128 10:49:40.821168 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-host\") pod \"crc-debug-hzscj\" (UID: \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\") " pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:40 crc kubenswrapper[4735]: I0128 10:49:40.922246 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5cjz\" (UniqueName: \"kubernetes.io/projected/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-kube-api-access-t5cjz\") pod \"crc-debug-hzscj\" (UID: \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\") " pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:40 crc kubenswrapper[4735]: I0128 10:49:40.922325 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-host\") pod \"crc-debug-hzscj\" (UID: \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\") " pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:40 crc kubenswrapper[4735]: I0128 10:49:40.922504 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-host\") pod \"crc-debug-hzscj\" (UID: \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\") " pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:41 crc 
kubenswrapper[4735]: I0128 10:49:41.363080 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5cjz\" (UniqueName: \"kubernetes.io/projected/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-kube-api-access-t5cjz\") pod \"crc-debug-hzscj\" (UID: \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\") " pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:41 crc kubenswrapper[4735]: I0128 10:49:41.519291 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed92995-36dc-402e-bff9-4158d64a4b65" path="/var/lib/kubelet/pods/aed92995-36dc-402e-bff9-4158d64a4b65/volumes" Jan 28 10:49:41 crc kubenswrapper[4735]: I0128 10:49:41.658825 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:42 crc kubenswrapper[4735]: I0128 10:49:42.652394 4735 generic.go:334] "Generic (PLEG): container finished" podID="cbf23d6d-ffc8-487c-b2f7-fb40b15948bc" containerID="c4fe4e095a4185dc8bd154205319c0d2cb7f034ef18ebddbce410a9c997632f0" exitCode=0 Jan 28 10:49:42 crc kubenswrapper[4735]: I0128 10:49:42.652497 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/crc-debug-hzscj" event={"ID":"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc","Type":"ContainerDied","Data":"c4fe4e095a4185dc8bd154205319c0d2cb7f034ef18ebddbce410a9c997632f0"} Jan 28 10:49:42 crc kubenswrapper[4735]: I0128 10:49:42.652898 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/crc-debug-hzscj" event={"ID":"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc","Type":"ContainerStarted","Data":"090fafbd331c63f63868bf75a549319010ad45a98a027947a21a0cc846f1caf9"} Jan 28 10:49:42 crc kubenswrapper[4735]: I0128 10:49:42.703053 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5mmtg/crc-debug-hzscj"] Jan 28 10:49:42 crc kubenswrapper[4735]: I0128 10:49:42.721950 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-5mmtg/crc-debug-hzscj"] Jan 28 10:49:43 crc kubenswrapper[4735]: I0128 10:49:43.785730 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:43 crc kubenswrapper[4735]: I0128 10:49:43.884192 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-host\") pod \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\" (UID: \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\") " Jan 28 10:49:43 crc kubenswrapper[4735]: I0128 10:49:43.884307 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-host" (OuterVolumeSpecName: "host") pod "cbf23d6d-ffc8-487c-b2f7-fb40b15948bc" (UID: "cbf23d6d-ffc8-487c-b2f7-fb40b15948bc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 10:49:43 crc kubenswrapper[4735]: I0128 10:49:43.884490 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5cjz\" (UniqueName: \"kubernetes.io/projected/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-kube-api-access-t5cjz\") pod \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\" (UID: \"cbf23d6d-ffc8-487c-b2f7-fb40b15948bc\") " Jan 28 10:49:43 crc kubenswrapper[4735]: I0128 10:49:43.884998 4735 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-host\") on node \"crc\" DevicePath \"\"" Jan 28 10:49:43 crc kubenswrapper[4735]: I0128 10:49:43.892911 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-kube-api-access-t5cjz" (OuterVolumeSpecName: "kube-api-access-t5cjz") pod "cbf23d6d-ffc8-487c-b2f7-fb40b15948bc" (UID: "cbf23d6d-ffc8-487c-b2f7-fb40b15948bc"). 
InnerVolumeSpecName "kube-api-access-t5cjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:49:43 crc kubenswrapper[4735]: I0128 10:49:43.987088 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5cjz\" (UniqueName: \"kubernetes.io/projected/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc-kube-api-access-t5cjz\") on node \"crc\" DevicePath \"\"" Jan 28 10:49:44 crc kubenswrapper[4735]: I0128 10:49:44.674198 4735 scope.go:117] "RemoveContainer" containerID="c4fe4e095a4185dc8bd154205319c0d2cb7f034ef18ebddbce410a9c997632f0" Jan 28 10:49:44 crc kubenswrapper[4735]: I0128 10:49:44.674277 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5mmtg/crc-debug-hzscj" Jan 28 10:49:45 crc kubenswrapper[4735]: I0128 10:49:45.514797 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf23d6d-ffc8-487c-b2f7-fb40b15948bc" path="/var/lib/kubelet/pods/cbf23d6d-ffc8-487c-b2f7-fb40b15948bc/volumes" Jan 28 10:50:05 crc kubenswrapper[4735]: I0128 10:50:05.430704 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:50:05 crc kubenswrapper[4735]: I0128 10:50:05.431518 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:50:15 crc kubenswrapper[4735]: I0128 10:50:15.167964 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f67d6d798-h9vrm_f38494ad-ec5a-4426-9d2c-6ef3c6ad877b/barbican-api/0.log" Jan 28 10:50:15 crc 
kubenswrapper[4735]: I0128 10:50:15.910901 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f67d6d798-h9vrm_f38494ad-ec5a-4426-9d2c-6ef3c6ad877b/barbican-api-log/0.log" Jan 28 10:50:15 crc kubenswrapper[4735]: I0128 10:50:15.949873 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74c4c54ddb-zk4nc_039cbbea-550c-4d71-96b2-6e06fcbc8916/barbican-keystone-listener/0.log" Jan 28 10:50:16 crc kubenswrapper[4735]: I0128 10:50:16.023299 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74c4c54ddb-zk4nc_039cbbea-550c-4d71-96b2-6e06fcbc8916/barbican-keystone-listener-log/0.log" Jan 28 10:50:16 crc kubenswrapper[4735]: I0128 10:50:16.130005 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79b69bb499-htdgz_83266f31-73f1-4b43-94c1-ceeb9ee128b3/barbican-worker/0.log" Jan 28 10:50:16 crc kubenswrapper[4735]: I0128 10:50:16.169614 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79b69bb499-htdgz_83266f31-73f1-4b43-94c1-ceeb9ee128b3/barbican-worker-log/0.log" Jan 28 10:50:16 crc kubenswrapper[4735]: I0128 10:50:16.365715 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_923bb257-f04f-4939-ad8a-10c4e6cc8aec/ceilometer-central-agent/0.log" Jan 28 10:50:16 crc kubenswrapper[4735]: I0128 10:50:16.367818 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xq5lb_7d008e88-e9b6-4c2e-befa-76e9ba1fbadd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:16 crc kubenswrapper[4735]: I0128 10:50:16.470417 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_923bb257-f04f-4939-ad8a-10c4e6cc8aec/ceilometer-notification-agent/0.log" Jan 28 10:50:16 crc kubenswrapper[4735]: I0128 10:50:16.542356 4735 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ceilometer-0_923bb257-f04f-4939-ad8a-10c4e6cc8aec/proxy-httpd/0.log" Jan 28 10:50:16 crc kubenswrapper[4735]: I0128 10:50:16.547068 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_923bb257-f04f-4939-ad8a-10c4e6cc8aec/sg-core/0.log" Jan 28 10:50:16 crc kubenswrapper[4735]: I0128 10:50:16.998628 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3fab6043-5ed8-4885-94a2-f26820e9609c/cinder-api-log/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.000625 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3fab6043-5ed8-4885-94a2-f26820e9609c/cinder-api/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.249806 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f68a4b62-3797-4888-b052-e4da32bc4dab/probe/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.280903 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f68a4b62-3797-4888-b052-e4da32bc4dab/cinder-scheduler/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.288569 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vp29s_d981f997-629e-4604-bd7a-4ae05efbeb3f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.500449 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-r8b22_f77b490a-f09b-4e92-a529-69d81c87478e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.529899 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-dpqvs_05f5d1ad-e9d0-4565-be27-0ad67a068056/init/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 
10:50:17.708310 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-dpqvs_05f5d1ad-e9d0-4565-be27-0ad67a068056/init/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.781255 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-6kf42_cce5eed7-eb00-4d7d-90c5-3f6d9cce70f1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.826602 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-dpqvs_05f5d1ad-e9d0-4565-be27-0ad67a068056/dnsmasq-dns/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.957766 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6/glance-httpd/0.log" Jan 28 10:50:17 crc kubenswrapper[4735]: I0128 10:50:17.980818 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e9a4ff23-cbc6-4505-a6ae-f6094bfbeac6/glance-log/0.log" Jan 28 10:50:18 crc kubenswrapper[4735]: I0128 10:50:18.115905 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f1d08a0a-642a-404a-a47a-4f98f7d211b0/glance-httpd/0.log" Jan 28 10:50:18 crc kubenswrapper[4735]: I0128 10:50:18.138781 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f1d08a0a-642a-404a-a47a-4f98f7d211b0/glance-log/0.log" Jan 28 10:50:18 crc kubenswrapper[4735]: I0128 10:50:18.416339 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-55bcb7d5f8-xkjl6_9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79/horizon/0.log" Jan 28 10:50:18 crc kubenswrapper[4735]: I0128 10:50:18.497618 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-v46fq_e9acc786-77bc-4555-b7f4-f0a337175a3c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:18 crc kubenswrapper[4735]: I0128 10:50:18.712215 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c9rml_3d0c7cb0-8a7d-4956-8cc0-c61510e64590/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:18 crc kubenswrapper[4735]: I0128 10:50:18.723020 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-55bcb7d5f8-xkjl6_9d466bf5-2b29-47a0-b5f9-25e6f6d7bf79/horizon-log/0.log" Jan 28 10:50:18 crc kubenswrapper[4735]: I0128 10:50:18.914442 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7b16556d-dd01-471a-a057-84ba1afcc8de/kube-state-metrics/0.log" Jan 28 10:50:18 crc kubenswrapper[4735]: I0128 10:50:18.995225 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69fccdbfc9-k2hlg_fe8fc9bd-9646-437c-818b-d34f1b751e19/keystone-api/0.log" Jan 28 10:50:19 crc kubenswrapper[4735]: I0128 10:50:19.136339 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-v6lq4_7c337230-3c7c-49aa-97cb-576327428c01/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:19 crc kubenswrapper[4735]: I0128 10:50:19.355872 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77978c49d9-227fz_585b2106-bf58-400c-8036-bd0220288f1f/neutron-api/0.log" Jan 28 10:50:19 crc kubenswrapper[4735]: I0128 10:50:19.404174 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77978c49d9-227fz_585b2106-bf58-400c-8036-bd0220288f1f/neutron-httpd/0.log" Jan 28 10:50:19 crc kubenswrapper[4735]: I0128 10:50:19.468475 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m84lb_7202ad41-aa50-4012-a34d-df9d19e34a1d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:19 crc kubenswrapper[4735]: I0128 10:50:19.981531 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c6bcae3-d25c-4076-91f5-cb8979303b25/nova-api-log/0.log" Jan 28 10:50:20 crc kubenswrapper[4735]: I0128 10:50:20.049604 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_dd58e731-439b-4ecf-b46a-2e978174ddf9/nova-cell0-conductor-conductor/0.log" Jan 28 10:50:20 crc kubenswrapper[4735]: I0128 10:50:20.266114 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_166e2c48-1a0f-4aad-bc51-26f82159a795/nova-cell1-conductor-conductor/0.log" Jan 28 10:50:20 crc kubenswrapper[4735]: I0128 10:50:20.395803 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5d7ae116-a9d5-4dc3-bc01-3a8fb08af693/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 10:50:20 crc kubenswrapper[4735]: I0128 10:50:20.415262 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c6bcae3-d25c-4076-91f5-cb8979303b25/nova-api-api/0.log" Jan 28 10:50:20 crc kubenswrapper[4735]: I0128 10:50:20.486793 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-dzfkm_71c10727-e356-4d4a-a316-b48e9d5756bd/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:20 crc kubenswrapper[4735]: I0128 10:50:20.721617 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2f9188ec-6ee4-4120-8c9e-ab22d7a57546/nova-metadata-log/0.log" Jan 28 10:50:20 crc kubenswrapper[4735]: I0128 10:50:20.938243 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_beb7a8ad-5283-42ff-85af-30eb94a8b291/mysql-bootstrap/0.log" Jan 28 10:50:21 crc kubenswrapper[4735]: I0128 10:50:21.077776 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f3554759-653b-4a68-99e7-7f6cddcd5e5a/nova-scheduler-scheduler/0.log" Jan 28 10:50:21 crc kubenswrapper[4735]: I0128 10:50:21.157062 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_beb7a8ad-5283-42ff-85af-30eb94a8b291/galera/0.log" Jan 28 10:50:21 crc kubenswrapper[4735]: I0128 10:50:21.436096 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c58a2fe7-0339-42bb-96f9-084e2b9a3182/mysql-bootstrap/0.log" Jan 28 10:50:21 crc kubenswrapper[4735]: I0128 10:50:21.503606 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_beb7a8ad-5283-42ff-85af-30eb94a8b291/mysql-bootstrap/0.log" Jan 28 10:50:21 crc kubenswrapper[4735]: I0128 10:50:21.667052 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c58a2fe7-0339-42bb-96f9-084e2b9a3182/mysql-bootstrap/0.log" Jan 28 10:50:21 crc kubenswrapper[4735]: I0128 10:50:21.706093 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c58a2fe7-0339-42bb-96f9-084e2b9a3182/galera/0.log" Jan 28 10:50:21 crc kubenswrapper[4735]: I0128 10:50:21.880212 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9360da99-a521-416d-868b-69526589a422/openstackclient/0.log" Jan 28 10:50:21 crc kubenswrapper[4735]: I0128 10:50:21.979305 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g77s5_d10ea80e-de8b-48b7-a31c-5e8941549930/openstack-network-exporter/0.log" Jan 28 10:50:22 crc kubenswrapper[4735]: I0128 10:50:22.080676 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_2f9188ec-6ee4-4120-8c9e-ab22d7a57546/nova-metadata-metadata/0.log" Jan 28 10:50:22 crc kubenswrapper[4735]: I0128 10:50:22.216569 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lh9fz_f7a32b8d-eace-4a5f-b187-baf0e7d67479/ovsdb-server-init/0.log" Jan 28 10:50:22 crc kubenswrapper[4735]: I0128 10:50:22.375789 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lh9fz_f7a32b8d-eace-4a5f-b187-baf0e7d67479/ovsdb-server-init/0.log" Jan 28 10:50:22 crc kubenswrapper[4735]: I0128 10:50:22.410469 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lh9fz_f7a32b8d-eace-4a5f-b187-baf0e7d67479/ovsdb-server/0.log" Jan 28 10:50:22 crc kubenswrapper[4735]: I0128 10:50:22.455623 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lh9fz_f7a32b8d-eace-4a5f-b187-baf0e7d67479/ovs-vswitchd/0.log" Jan 28 10:50:22 crc kubenswrapper[4735]: I0128 10:50:22.629786 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rdcgm_21c583b0-22fd-4ccf-907a-1bb119119830/ovn-controller/0.log" Jan 28 10:50:22 crc kubenswrapper[4735]: I0128 10:50:22.720525 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-4l92q_bbd83f26-98f3-47a9-9fa0-7225ba6b97ac/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:22 crc kubenswrapper[4735]: I0128 10:50:22.819821 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09a13939-6aac-486c-a541-5c3683bff770/openstack-network-exporter/0.log" Jan 28 10:50:22 crc kubenswrapper[4735]: I0128 10:50:22.858585 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09a13939-6aac-486c-a541-5c3683bff770/ovn-northd/0.log" Jan 28 10:50:23 crc kubenswrapper[4735]: I0128 10:50:23.004343 4735 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4362d077-82db-43b3-8df1-b60d2d34028c/openstack-network-exporter/0.log" Jan 28 10:50:23 crc kubenswrapper[4735]: I0128 10:50:23.042788 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4362d077-82db-43b3-8df1-b60d2d34028c/ovsdbserver-nb/0.log" Jan 28 10:50:23 crc kubenswrapper[4735]: I0128 10:50:23.163940 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dc607c73-caa4-4986-90b6-1b101b32970b/openstack-network-exporter/0.log" Jan 28 10:50:23 crc kubenswrapper[4735]: I0128 10:50:23.716943 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dc607c73-caa4-4986-90b6-1b101b32970b/ovsdbserver-sb/0.log" Jan 28 10:50:23 crc kubenswrapper[4735]: I0128 10:50:23.798155 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bbdbbfb48-tj5mr_23c7392e-2cdc-4e9a-9a25-9f867a675d48/placement-api/0.log" Jan 28 10:50:23 crc kubenswrapper[4735]: I0128 10:50:23.878522 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7bbdbbfb48-tj5mr_23c7392e-2cdc-4e9a-9a25-9f867a675d48/placement-log/0.log" Jan 28 10:50:23 crc kubenswrapper[4735]: I0128 10:50:23.999404 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5514390d-3d1c-45af-b063-0ba631c09e43/setup-container/0.log" Jan 28 10:50:24 crc kubenswrapper[4735]: I0128 10:50:24.231983 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5514390d-3d1c-45af-b063-0ba631c09e43/setup-container/0.log" Jan 28 10:50:24 crc kubenswrapper[4735]: I0128 10:50:24.246938 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5514390d-3d1c-45af-b063-0ba631c09e43/rabbitmq/0.log" Jan 28 10:50:24 crc kubenswrapper[4735]: I0128 10:50:24.343981 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_bbefc8b9-3deb-4ae1-a798-1e95f1014380/setup-container/0.log" Jan 28 10:50:24 crc kubenswrapper[4735]: I0128 10:50:24.555183 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-j2njc_fa4fa749-d4d1-43bd-9ec1-0a52fb7458d1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:24 crc kubenswrapper[4735]: I0128 10:50:24.594696 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bbefc8b9-3deb-4ae1-a798-1e95f1014380/rabbitmq/0.log" Jan 28 10:50:24 crc kubenswrapper[4735]: I0128 10:50:24.606965 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bbefc8b9-3deb-4ae1-a798-1e95f1014380/setup-container/0.log" Jan 28 10:50:24 crc kubenswrapper[4735]: I0128 10:50:24.764632 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-l4f99_84ad913f-0b4f-4644-8aeb-b85bfc1da4ee/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:25 crc kubenswrapper[4735]: I0128 10:50:25.518873 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-p7fn9_2e5e3a3d-adbc-42b9-a436-a3b565cfdb70/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:25 crc kubenswrapper[4735]: I0128 10:50:25.522865 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-tggpz_1b4ed502-7077-42e3-b1a5-a70f4d8c2aab/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:25 crc kubenswrapper[4735]: I0128 10:50:25.695694 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5tc5h_812becf2-d241-43e6-b9e4-58263bbfb115/ssh-known-hosts-edpm-deployment/0.log" Jan 28 10:50:25 crc kubenswrapper[4735]: I0128 10:50:25.863264 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-79b9899549-8qxf5_681a40ab-fe34-4825-8336-3b8048fc04b7/proxy-server/0.log" Jan 28 10:50:25 crc kubenswrapper[4735]: I0128 10:50:25.929558 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-79b9899549-8qxf5_681a40ab-fe34-4825-8336-3b8048fc04b7/proxy-httpd/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.074351 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-h9q76_8f9e278a-14f9-4162-acea-69717d616573/swift-ring-rebalance/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.143472 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/account-auditor/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.163112 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/account-reaper/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.362296 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/account-replicator/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.372263 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/account-server/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.420851 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/container-auditor/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.442998 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/container-replicator/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.567388 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/container-server/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.642046 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-auditor/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.668974 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/container-updater/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.721194 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-expirer/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.806393 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-replicator/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.821362 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-server/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.873040 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/object-updater/0.log" Jan 28 10:50:26 crc kubenswrapper[4735]: I0128 10:50:26.972429 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/rsync/0.log" Jan 28 10:50:27 crc kubenswrapper[4735]: I0128 10:50:27.047179 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_718744c9-5139-48b6-bf2c-7a29489d0fa7/swift-recon-cron/0.log" Jan 28 10:50:27 crc kubenswrapper[4735]: I0128 10:50:27.215611 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-f24zk_05947d1f-31c3-4046-b5e5-c70249faa781/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:27 crc kubenswrapper[4735]: I0128 10:50:27.273572 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d5faba03-f0db-452a-bf0c-271d930e27b5/tempest-tests-tempest-tests-runner/0.log" Jan 28 10:50:27 crc kubenswrapper[4735]: I0128 10:50:27.409561 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_1d4b1627-ee2b-44ec-b1e8-4909d05f4d03/test-operator-logs-container/0.log" Jan 28 10:50:27 crc kubenswrapper[4735]: I0128 10:50:27.488601 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6nxp8_eca0937f-f5d2-4682-9ac0-92041c4fac82/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 10:50:33 crc kubenswrapper[4735]: I0128 10:50:33.873450 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f5fnx"] Jan 28 10:50:33 crc kubenswrapper[4735]: E0128 10:50:33.875934 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf23d6d-ffc8-487c-b2f7-fb40b15948bc" containerName="container-00" Jan 28 10:50:33 crc kubenswrapper[4735]: I0128 10:50:33.876057 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf23d6d-ffc8-487c-b2f7-fb40b15948bc" containerName="container-00" Jan 28 10:50:33 crc kubenswrapper[4735]: I0128 10:50:33.876414 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf23d6d-ffc8-487c-b2f7-fb40b15948bc" containerName="container-00" Jan 28 10:50:33 crc kubenswrapper[4735]: I0128 10:50:33.878240 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:33 crc kubenswrapper[4735]: I0128 10:50:33.896389 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5fnx"] Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.027439 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-utilities\") pod \"community-operators-f5fnx\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.027485 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-catalog-content\") pod \"community-operators-f5fnx\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.027607 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4wg\" (UniqueName: \"kubernetes.io/projected/7e7b1c07-b070-43eb-9226-75e1eafe1d30-kube-api-access-9j4wg\") pod \"community-operators-f5fnx\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.128946 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-utilities\") pod \"community-operators-f5fnx\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.128984 4735 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-catalog-content\") pod \"community-operators-f5fnx\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.129035 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4wg\" (UniqueName: \"kubernetes.io/projected/7e7b1c07-b070-43eb-9226-75e1eafe1d30-kube-api-access-9j4wg\") pod \"community-operators-f5fnx\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.129376 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-utilities\") pod \"community-operators-f5fnx\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.129682 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-catalog-content\") pod \"community-operators-f5fnx\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.151117 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j4wg\" (UniqueName: \"kubernetes.io/projected/7e7b1c07-b070-43eb-9226-75e1eafe1d30-kube-api-access-9j4wg\") pod \"community-operators-f5fnx\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:34 crc kubenswrapper[4735]: I0128 10:50:34.202074 4735 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:35 crc kubenswrapper[4735]: I0128 10:50:34.696789 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5fnx"] Jan 28 10:50:35 crc kubenswrapper[4735]: I0128 10:50:35.113016 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5fnx" event={"ID":"7e7b1c07-b070-43eb-9226-75e1eafe1d30","Type":"ContainerStarted","Data":"ee4a8cb31bac247d49dd5785fe3286434840b55d4de8c9b6aa931f5d840d4580"} Jan 28 10:50:35 crc kubenswrapper[4735]: I0128 10:50:35.429312 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:50:35 crc kubenswrapper[4735]: I0128 10:50:35.429860 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:50:36 crc kubenswrapper[4735]: I0128 10:50:36.125183 4735 generic.go:334] "Generic (PLEG): container finished" podID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerID="3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b" exitCode=0 Jan 28 10:50:36 crc kubenswrapper[4735]: I0128 10:50:36.125232 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5fnx" event={"ID":"7e7b1c07-b070-43eb-9226-75e1eafe1d30","Type":"ContainerDied","Data":"3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b"} Jan 28 10:50:37 crc kubenswrapper[4735]: I0128 10:50:37.134843 4735 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-f5fnx" event={"ID":"7e7b1c07-b070-43eb-9226-75e1eafe1d30","Type":"ContainerStarted","Data":"3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0"} Jan 28 10:50:37 crc kubenswrapper[4735]: I0128 10:50:37.379523 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c478e32a-3e57-49eb-8bd0-cff353ad0f56/memcached/0.log" Jan 28 10:50:38 crc kubenswrapper[4735]: I0128 10:50:38.143090 4735 generic.go:334] "Generic (PLEG): container finished" podID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerID="3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0" exitCode=0 Jan 28 10:50:38 crc kubenswrapper[4735]: I0128 10:50:38.143155 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5fnx" event={"ID":"7e7b1c07-b070-43eb-9226-75e1eafe1d30","Type":"ContainerDied","Data":"3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0"} Jan 28 10:50:39 crc kubenswrapper[4735]: I0128 10:50:39.157320 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5fnx" event={"ID":"7e7b1c07-b070-43eb-9226-75e1eafe1d30","Type":"ContainerStarted","Data":"d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03"} Jan 28 10:50:39 crc kubenswrapper[4735]: I0128 10:50:39.185405 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f5fnx" podStartSLOduration=3.479079957 podStartE2EDuration="6.185387071s" podCreationTimestamp="2026-01-28 10:50:33 +0000 UTC" firstStartedPulling="2026-01-28 10:50:36.127301812 +0000 UTC m=+4089.276966185" lastFinishedPulling="2026-01-28 10:50:38.833608916 +0000 UTC m=+4091.983273299" observedRunningTime="2026-01-28 10:50:39.176470203 +0000 UTC m=+4092.326134576" watchObservedRunningTime="2026-01-28 10:50:39.185387071 +0000 UTC m=+4092.335051444" Jan 28 10:50:44 crc 
kubenswrapper[4735]: I0128 10:50:44.203277 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:44 crc kubenswrapper[4735]: I0128 10:50:44.203935 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:44 crc kubenswrapper[4735]: I0128 10:50:44.269287 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:45 crc kubenswrapper[4735]: I0128 10:50:45.815096 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:45 crc kubenswrapper[4735]: I0128 10:50:45.877459 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f5fnx"] Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.233445 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f5fnx" podUID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerName="registry-server" containerID="cri-o://d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03" gracePeriod=2 Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.738637 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.827891 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-catalog-content\") pod \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.828098 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-utilities\") pod \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.828132 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j4wg\" (UniqueName: \"kubernetes.io/projected/7e7b1c07-b070-43eb-9226-75e1eafe1d30-kube-api-access-9j4wg\") pod \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\" (UID: \"7e7b1c07-b070-43eb-9226-75e1eafe1d30\") " Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.828888 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-utilities" (OuterVolumeSpecName: "utilities") pod "7e7b1c07-b070-43eb-9226-75e1eafe1d30" (UID: "7e7b1c07-b070-43eb-9226-75e1eafe1d30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.835169 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7b1c07-b070-43eb-9226-75e1eafe1d30-kube-api-access-9j4wg" (OuterVolumeSpecName: "kube-api-access-9j4wg") pod "7e7b1c07-b070-43eb-9226-75e1eafe1d30" (UID: "7e7b1c07-b070-43eb-9226-75e1eafe1d30"). InnerVolumeSpecName "kube-api-access-9j4wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.882437 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e7b1c07-b070-43eb-9226-75e1eafe1d30" (UID: "7e7b1c07-b070-43eb-9226-75e1eafe1d30"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.931789 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.931826 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j4wg\" (UniqueName: \"kubernetes.io/projected/7e7b1c07-b070-43eb-9226-75e1eafe1d30-kube-api-access-9j4wg\") on node \"crc\" DevicePath \"\"" Jan 28 10:50:47 crc kubenswrapper[4735]: I0128 10:50:47.931841 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e7b1c07-b070-43eb-9226-75e1eafe1d30-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.243954 4735 generic.go:334] "Generic (PLEG): container finished" podID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerID="d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03" exitCode=0 Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.244017 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5fnx" event={"ID":"7e7b1c07-b070-43eb-9226-75e1eafe1d30","Type":"ContainerDied","Data":"d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03"} Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.244073 4735 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-f5fnx" event={"ID":"7e7b1c07-b070-43eb-9226-75e1eafe1d30","Type":"ContainerDied","Data":"ee4a8cb31bac247d49dd5785fe3286434840b55d4de8c9b6aa931f5d840d4580"} Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.244093 4735 scope.go:117] "RemoveContainer" containerID="d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.244044 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f5fnx" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.269782 4735 scope.go:117] "RemoveContainer" containerID="3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.289013 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f5fnx"] Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.303979 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f5fnx"] Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.304016 4735 scope.go:117] "RemoveContainer" containerID="3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.337284 4735 scope.go:117] "RemoveContainer" containerID="d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03" Jan 28 10:50:48 crc kubenswrapper[4735]: E0128 10:50:48.337878 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03\": container with ID starting with d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03 not found: ID does not exist" containerID="d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 
10:50:48.337999 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03"} err="failed to get container status \"d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03\": rpc error: code = NotFound desc = could not find container \"d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03\": container with ID starting with d7ba298b6e7708e927fa1d7a9f9d8a8dbca632eaa0b8ac511898eec0ebd6ca03 not found: ID does not exist" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.338107 4735 scope.go:117] "RemoveContainer" containerID="3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0" Jan 28 10:50:48 crc kubenswrapper[4735]: E0128 10:50:48.338591 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0\": container with ID starting with 3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0 not found: ID does not exist" containerID="3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.338682 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0"} err="failed to get container status \"3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0\": rpc error: code = NotFound desc = could not find container \"3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0\": container with ID starting with 3c8ad3db8757af7d7a6d26da700a061919ceff4cb099e8001db5d2496c445ea0 not found: ID does not exist" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.338780 4735 scope.go:117] "RemoveContainer" containerID="3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b" Jan 28 10:50:48 crc 
kubenswrapper[4735]: E0128 10:50:48.339101 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b\": container with ID starting with 3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b not found: ID does not exist" containerID="3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b" Jan 28 10:50:48 crc kubenswrapper[4735]: I0128 10:50:48.339202 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b"} err="failed to get container status \"3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b\": rpc error: code = NotFound desc = could not find container \"3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b\": container with ID starting with 3d44b760d88693ae8c82e95d164e36dbf6cfaa05f3d34b7a488e51d7a886f34b not found: ID does not exist" Jan 28 10:50:49 crc kubenswrapper[4735]: I0128 10:50:49.527006 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" path="/var/lib/kubelet/pods/7e7b1c07-b070-43eb-9226-75e1eafe1d30/volumes" Jan 28 10:50:54 crc kubenswrapper[4735]: I0128 10:50:54.191234 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/util/0.log" Jan 28 10:50:54 crc kubenswrapper[4735]: I0128 10:50:54.393878 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/util/0.log" Jan 28 10:50:54 crc kubenswrapper[4735]: I0128 10:50:54.397787 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/pull/0.log" Jan 28 10:50:54 crc kubenswrapper[4735]: I0128 10:50:54.436873 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/pull/0.log" Jan 28 10:50:54 crc kubenswrapper[4735]: I0128 10:50:54.567102 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/pull/0.log" Jan 28 10:50:54 crc kubenswrapper[4735]: I0128 10:50:54.572038 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/util/0.log" Jan 28 10:50:54 crc kubenswrapper[4735]: I0128 10:50:54.602798 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_143211e634e0d7ada6bf68b21b47a4971b7b466d19f214236b41ce621fx57bf_5ec63835-f891-4098-a2b0-348948b4a3ec/extract/0.log" Jan 28 10:50:55 crc kubenswrapper[4735]: I0128 10:50:55.422250 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-vxtqg_60a1f78e-f6c9-4335-8545-533014df520c/manager/0.log" Jan 28 10:50:55 crc kubenswrapper[4735]: I0128 10:50:55.434423 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-pxbwq_4eee1f38-6076-431a-83c4-f6ac2c91bf2b/manager/0.log" Jan 28 10:50:55 crc kubenswrapper[4735]: I0128 10:50:55.556414 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-vlxzd_ad40699a-df4d-4aef-ba9e-ec411784138f/manager/0.log" Jan 28 10:50:55 crc kubenswrapper[4735]: 
I0128 10:50:55.656932 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-5qn45_23c42560-8419-496c-8b21-ebb1315a7e4d/manager/0.log" Jan 28 10:50:55 crc kubenswrapper[4735]: I0128 10:50:55.771123 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-77khw_ab02d2a5-6cf8-4f01-8381-2ea58829a846/manager/0.log" Jan 28 10:50:55 crc kubenswrapper[4735]: I0128 10:50:55.924875 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-zwpzq_9ada3a05-5552-4191-ac59-bbeeb2621780/manager/0.log" Jan 28 10:50:56 crc kubenswrapper[4735]: I0128 10:50:56.187216 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-gtbjm_75e33671-8cc3-4470-845b-b45c777c3e9b/manager/0.log" Jan 28 10:50:56 crc kubenswrapper[4735]: I0128 10:50:56.280540 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-mz4t7_8139f6ed-d10a-4966-a7d2-a5a9412bd235/manager/0.log" Jan 28 10:50:56 crc kubenswrapper[4735]: I0128 10:50:56.374872 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-b2ths_fa2276dc-647b-427e-9ff5-c25f1d351ee6/manager/0.log" Jan 28 10:50:56 crc kubenswrapper[4735]: I0128 10:50:56.427433 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-lt6sx_dbfb38f5-9834-44f5-a84d-f17bb28538f6/manager/0.log" Jan 28 10:50:57 crc kubenswrapper[4735]: I0128 10:50:57.070846 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-d2hvr_7ee4898b-8413-497c-b0f8-6003e75d1bd4/manager/0.log" Jan 28 10:50:57 crc 
kubenswrapper[4735]: I0128 10:50:57.119940 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-xds2t_99934609-5407-442a-a887-ba47b516506a/manager/0.log" Jan 28 10:50:57 crc kubenswrapper[4735]: I0128 10:50:57.305696 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-fp94q_d3cf4444-7ac3-4c53-9b08-f2ad2d5a7c17/manager/0.log" Jan 28 10:50:57 crc kubenswrapper[4735]: I0128 10:50:57.336977 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-94dd99d7d-6pbvz_6300756c-e96b-40d5-ab3d-a044cb39016b/manager/0.log" Jan 28 10:50:57 crc kubenswrapper[4735]: I0128 10:50:57.486723 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854cs2sp_49ce66b9-31db-4d8d-adc4-f7e178cdceec/manager/0.log" Jan 28 10:50:57 crc kubenswrapper[4735]: I0128 10:50:57.612215 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-745854f465-rgjxp_eb187ad2-0ae5-41e5-b649-4a55cdfb6aef/operator/0.log" Jan 28 10:50:57 crc kubenswrapper[4735]: I0128 10:50:57.815224 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nmkt5_f6eacb6b-9066-44c5-a483-02274a4e3884/registry-server/0.log" Jan 28 10:50:58 crc kubenswrapper[4735]: I0128 10:50:58.105889 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-8svwm_b91503a2-0819-4923-a8c2-63f35bded503/manager/0.log" Jan 28 10:50:58 crc kubenswrapper[4735]: I0128 10:50:58.117951 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-xvd9h_c0301ee4-c19c-425e-9ba4-f66e2b405835/manager/0.log" Jan 28 
10:50:58 crc kubenswrapper[4735]: I0128 10:50:58.284661 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jbc6s_d401e4f7-afe6-4095-86cc-f43f43130c4b/operator/0.log" Jan 28 10:50:58 crc kubenswrapper[4735]: I0128 10:50:58.415022 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-2hkvv_24c4a2af-7bb3-4a12-8279-58f7dda0ef7e/manager/0.log" Jan 28 10:50:58 crc kubenswrapper[4735]: I0128 10:50:58.595932 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-pcp57_4699fc29-624c-482c-9e68-e71bdf957701/manager/0.log" Jan 28 10:50:58 crc kubenswrapper[4735]: I0128 10:50:58.627566 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-bcnkf_351faed5-c166-4766-99af-3c5b728b1f78/manager/0.log" Jan 28 10:50:58 crc kubenswrapper[4735]: I0128 10:50:58.767305 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7f5f655bfc-9jbp7_77910ebb-c49c-432e-b224-4fbafcaf7853/manager/0.log" Jan 28 10:50:58 crc kubenswrapper[4735]: I0128 10:50:58.781953 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-767b8bc766-cmtww_733ac7cb-682a-4a00-b6eb-404b3010295c/manager/0.log" Jan 28 10:51:05 crc kubenswrapper[4735]: I0128 10:51:05.429798 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:51:05 crc kubenswrapper[4735]: I0128 10:51:05.430375 4735 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:51:05 crc kubenswrapper[4735]: I0128 10:51:05.430432 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:51:05 crc kubenswrapper[4735]: I0128 10:51:05.431224 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00942994fe73689ac14efdb488c14b528c781f0c4569d2a1e97e5dd87ba17b37"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:51:05 crc kubenswrapper[4735]: I0128 10:51:05.431282 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" containerID="cri-o://00942994fe73689ac14efdb488c14b528c781f0c4569d2a1e97e5dd87ba17b37" gracePeriod=600 Jan 28 10:51:06 crc kubenswrapper[4735]: I0128 10:51:06.407602 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="00942994fe73689ac14efdb488c14b528c781f0c4569d2a1e97e5dd87ba17b37" exitCode=0 Jan 28 10:51:06 crc kubenswrapper[4735]: I0128 10:51:06.407709 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"00942994fe73689ac14efdb488c14b528c781f0c4569d2a1e97e5dd87ba17b37"} Jan 28 10:51:06 crc kubenswrapper[4735]: I0128 10:51:06.408261 4735 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerStarted","Data":"f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4"} Jan 28 10:51:06 crc kubenswrapper[4735]: I0128 10:51:06.408302 4735 scope.go:117] "RemoveContainer" containerID="6df16c80cb062d26dae98a871d5f5c7fe54b69220e5da907e7eb695b9c975a9d" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.352642 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d79b2"] Jan 28 10:51:14 crc kubenswrapper[4735]: E0128 10:51:14.354106 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerName="extract-utilities" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.354141 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerName="extract-utilities" Jan 28 10:51:14 crc kubenswrapper[4735]: E0128 10:51:14.354177 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerName="registry-server" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.354196 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerName="registry-server" Jan 28 10:51:14 crc kubenswrapper[4735]: E0128 10:51:14.354230 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerName="extract-content" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.354247 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerName="extract-content" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.354784 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7b1c07-b070-43eb-9226-75e1eafe1d30" containerName="registry-server" Jan 28 
10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.357312 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.366154 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d79b2"] Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.449242 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t4bj\" (UniqueName: \"kubernetes.io/projected/07dd3570-7fe8-4a24-8c94-28c4dc1e80db-kube-api-access-5t4bj\") pod \"redhat-operators-d79b2\" (UID: \"07dd3570-7fe8-4a24-8c94-28c4dc1e80db\") " pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.449571 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07dd3570-7fe8-4a24-8c94-28c4dc1e80db-catalog-content\") pod \"redhat-operators-d79b2\" (UID: \"07dd3570-7fe8-4a24-8c94-28c4dc1e80db\") " pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.449660 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07dd3570-7fe8-4a24-8c94-28c4dc1e80db-utilities\") pod \"redhat-operators-d79b2\" (UID: \"07dd3570-7fe8-4a24-8c94-28c4dc1e80db\") " pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.551794 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t4bj\" (UniqueName: \"kubernetes.io/projected/07dd3570-7fe8-4a24-8c94-28c4dc1e80db-kube-api-access-5t4bj\") pod \"redhat-operators-d79b2\" (UID: \"07dd3570-7fe8-4a24-8c94-28c4dc1e80db\") " pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 
10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.551958 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07dd3570-7fe8-4a24-8c94-28c4dc1e80db-catalog-content\") pod \"redhat-operators-d79b2\" (UID: \"07dd3570-7fe8-4a24-8c94-28c4dc1e80db\") " pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.552042 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07dd3570-7fe8-4a24-8c94-28c4dc1e80db-utilities\") pod \"redhat-operators-d79b2\" (UID: \"07dd3570-7fe8-4a24-8c94-28c4dc1e80db\") " pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.552466 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07dd3570-7fe8-4a24-8c94-28c4dc1e80db-catalog-content\") pod \"redhat-operators-d79b2\" (UID: \"07dd3570-7fe8-4a24-8c94-28c4dc1e80db\") " pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.552505 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07dd3570-7fe8-4a24-8c94-28c4dc1e80db-utilities\") pod \"redhat-operators-d79b2\" (UID: \"07dd3570-7fe8-4a24-8c94-28c4dc1e80db\") " pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.572203 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t4bj\" (UniqueName: \"kubernetes.io/projected/07dd3570-7fe8-4a24-8c94-28c4dc1e80db-kube-api-access-5t4bj\") pod \"redhat-operators-d79b2\" (UID: \"07dd3570-7fe8-4a24-8c94-28c4dc1e80db\") " pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:14 crc kubenswrapper[4735]: I0128 10:51:14.677611 4735 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:15 crc kubenswrapper[4735]: I0128 10:51:15.150313 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d79b2"] Jan 28 10:51:15 crc kubenswrapper[4735]: I0128 10:51:15.488015 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d79b2" event={"ID":"07dd3570-7fe8-4a24-8c94-28c4dc1e80db","Type":"ContainerStarted","Data":"0582619d867094394dd17d47b9ee702d01c0ec7c49735b63320734dc5f69d40b"} Jan 28 10:51:16 crc kubenswrapper[4735]: I0128 10:51:16.499827 4735 generic.go:334] "Generic (PLEG): container finished" podID="07dd3570-7fe8-4a24-8c94-28c4dc1e80db" containerID="4226e6ec3ed23f71c75ebd26f42f0db2b88dd25a02b8bfe868dc7dee18f6c955" exitCode=0 Jan 28 10:51:16 crc kubenswrapper[4735]: I0128 10:51:16.499898 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d79b2" event={"ID":"07dd3570-7fe8-4a24-8c94-28c4dc1e80db","Type":"ContainerDied","Data":"4226e6ec3ed23f71c75ebd26f42f0db2b88dd25a02b8bfe868dc7dee18f6c955"} Jan 28 10:51:19 crc kubenswrapper[4735]: I0128 10:51:19.221393 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fzvkk_432a29bc-0436-4d07-bb40-25bb08020746/control-plane-machine-set-operator/0.log" Jan 28 10:51:19 crc kubenswrapper[4735]: I0128 10:51:19.434050 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wpjcc_c5d19be7-214e-4428-835a-5c325772fe6e/kube-rbac-proxy/0.log" Jan 28 10:51:19 crc kubenswrapper[4735]: I0128 10:51:19.481306 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wpjcc_c5d19be7-214e-4428-835a-5c325772fe6e/machine-api-operator/0.log" Jan 28 10:51:32 crc kubenswrapper[4735]: I0128 
10:51:32.655197 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d79b2" event={"ID":"07dd3570-7fe8-4a24-8c94-28c4dc1e80db","Type":"ContainerStarted","Data":"bc9f95536416be9de7ca8d70f770fc2496a96c420b22e50143fcb1ffaeb720e5"} Jan 28 10:51:33 crc kubenswrapper[4735]: I0128 10:51:33.171591 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-hw27n_cb9ea2bc-7e7e-4b81-ae1e-64ce3ec17177/cert-manager-controller/0.log" Jan 28 10:51:33 crc kubenswrapper[4735]: I0128 10:51:33.352892 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-gbqg5_6f21432c-a03e-4600-8c86-65159cf299ba/cert-manager-cainjector/0.log" Jan 28 10:51:33 crc kubenswrapper[4735]: I0128 10:51:33.402258 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-zxd5m_6d0afe1e-f921-43c9-8f44-3498f0f45b85/cert-manager-webhook/0.log" Jan 28 10:51:34 crc kubenswrapper[4735]: I0128 10:51:34.674529 4735 generic.go:334] "Generic (PLEG): container finished" podID="07dd3570-7fe8-4a24-8c94-28c4dc1e80db" containerID="bc9f95536416be9de7ca8d70f770fc2496a96c420b22e50143fcb1ffaeb720e5" exitCode=0 Jan 28 10:51:34 crc kubenswrapper[4735]: I0128 10:51:34.674658 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d79b2" event={"ID":"07dd3570-7fe8-4a24-8c94-28c4dc1e80db","Type":"ContainerDied","Data":"bc9f95536416be9de7ca8d70f770fc2496a96c420b22e50143fcb1ffaeb720e5"} Jan 28 10:51:35 crc kubenswrapper[4735]: I0128 10:51:35.685326 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d79b2" event={"ID":"07dd3570-7fe8-4a24-8c94-28c4dc1e80db","Type":"ContainerStarted","Data":"c957b80796b2256052acb477ff41d19d07e05049966bdfc2c62bf161e573fa23"} Jan 28 10:51:35 crc kubenswrapper[4735]: I0128 10:51:35.713461 4735 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d79b2" podStartSLOduration=3.071611227 podStartE2EDuration="21.713439783s" podCreationTimestamp="2026-01-28 10:51:14 +0000 UTC" firstStartedPulling="2026-01-28 10:51:16.502208289 +0000 UTC m=+4129.651872672" lastFinishedPulling="2026-01-28 10:51:35.144036855 +0000 UTC m=+4148.293701228" observedRunningTime="2026-01-28 10:51:35.702226257 +0000 UTC m=+4148.851890630" watchObservedRunningTime="2026-01-28 10:51:35.713439783 +0000 UTC m=+4148.863104146" Jan 28 10:51:44 crc kubenswrapper[4735]: I0128 10:51:44.678420 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:44 crc kubenswrapper[4735]: I0128 10:51:44.679090 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:45 crc kubenswrapper[4735]: I0128 10:51:45.728623 4735 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d79b2" podUID="07dd3570-7fe8-4a24-8c94-28c4dc1e80db" containerName="registry-server" probeResult="failure" output=< Jan 28 10:51:45 crc kubenswrapper[4735]: timeout: failed to connect service ":50051" within 1s Jan 28 10:51:45 crc kubenswrapper[4735]: > Jan 28 10:51:48 crc kubenswrapper[4735]: I0128 10:51:48.046514 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-grbj7_5a554582-3291-4541-919b-b58a23e818df/nmstate-console-plugin/0.log" Jan 28 10:51:48 crc kubenswrapper[4735]: I0128 10:51:48.059423 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7pft9_8d7d44ef-aa10-45a5-9ee3-a61c6178fb52/nmstate-handler/0.log" Jan 28 10:51:48 crc kubenswrapper[4735]: I0128 10:51:48.245463 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-v9d5m_855f8b5d-befc-4d5d-9653-c9820ca8eafd/kube-rbac-proxy/0.log" Jan 28 10:51:48 crc kubenswrapper[4735]: I0128 10:51:48.306645 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-v9d5m_855f8b5d-befc-4d5d-9653-c9820ca8eafd/nmstate-metrics/0.log" Jan 28 10:51:48 crc kubenswrapper[4735]: I0128 10:51:48.422791 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-7nsxc_d505ac14-ad3f-45fc-a07c-c10acaeb03d2/nmstate-operator/0.log" Jan 28 10:51:48 crc kubenswrapper[4735]: I0128 10:51:48.479795 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-h28tw_19bcd978-0715-474c-8abf-6028efe34478/nmstate-webhook/0.log" Jan 28 10:51:54 crc kubenswrapper[4735]: I0128 10:51:54.738282 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:54 crc kubenswrapper[4735]: I0128 10:51:54.801396 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d79b2" Jan 28 10:51:54 crc kubenswrapper[4735]: I0128 10:51:54.869854 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d79b2"] Jan 28 10:51:54 crc kubenswrapper[4735]: I0128 10:51:54.978570 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sfgcd"] Jan 28 10:51:54 crc kubenswrapper[4735]: I0128 10:51:54.978863 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sfgcd" podUID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerName="registry-server" containerID="cri-o://1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b" gracePeriod=2 Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.432532 4735 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.547368 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-utilities\") pod \"47112b11-5b61-4d2c-8c6b-059b673f74c2\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.547483 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-catalog-content\") pod \"47112b11-5b61-4d2c-8c6b-059b673f74c2\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.547902 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t8tl\" (UniqueName: \"kubernetes.io/projected/47112b11-5b61-4d2c-8c6b-059b673f74c2-kube-api-access-8t8tl\") pod \"47112b11-5b61-4d2c-8c6b-059b673f74c2\" (UID: \"47112b11-5b61-4d2c-8c6b-059b673f74c2\") " Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.553971 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-utilities" (OuterVolumeSpecName: "utilities") pod "47112b11-5b61-4d2c-8c6b-059b673f74c2" (UID: "47112b11-5b61-4d2c-8c6b-059b673f74c2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.650533 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.692292 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47112b11-5b61-4d2c-8c6b-059b673f74c2" (UID: "47112b11-5b61-4d2c-8c6b-059b673f74c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.751963 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47112b11-5b61-4d2c-8c6b-059b673f74c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.871271 4735 generic.go:334] "Generic (PLEG): container finished" podID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerID="1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b" exitCode=0 Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.871343 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfgcd" Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.871385 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgcd" event={"ID":"47112b11-5b61-4d2c-8c6b-059b673f74c2","Type":"ContainerDied","Data":"1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b"} Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.871450 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfgcd" event={"ID":"47112b11-5b61-4d2c-8c6b-059b673f74c2","Type":"ContainerDied","Data":"da7ff6e512cee772eabe5d9d7ca6a90d9eec484519c766d12a132458960acd22"} Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.871474 4735 scope.go:117] "RemoveContainer" containerID="1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b" Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.893219 4735 scope.go:117] "RemoveContainer" containerID="567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489" Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.964711 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47112b11-5b61-4d2c-8c6b-059b673f74c2-kube-api-access-8t8tl" (OuterVolumeSpecName: "kube-api-access-8t8tl") pod "47112b11-5b61-4d2c-8c6b-059b673f74c2" (UID: "47112b11-5b61-4d2c-8c6b-059b673f74c2"). InnerVolumeSpecName "kube-api-access-8t8tl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:51:55 crc kubenswrapper[4735]: I0128 10:51:55.982103 4735 scope.go:117] "RemoveContainer" containerID="af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904" Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.058231 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t8tl\" (UniqueName: \"kubernetes.io/projected/47112b11-5b61-4d2c-8c6b-059b673f74c2-kube-api-access-8t8tl\") on node \"crc\" DevicePath \"\"" Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.112855 4735 scope.go:117] "RemoveContainer" containerID="af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904" Jan 28 10:51:56 crc kubenswrapper[4735]: E0128 10:51:56.160200 4735 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_extract-utilities_redhat-operators-sfgcd_openshift-marketplace_47112b11-5b61-4d2c-8c6b-059b673f74c2_0 in pod sandbox da7ff6e512cee772eabe5d9d7ca6a90d9eec484519c766d12a132458960acd22 from index: no such id: 'af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904'" containerID="af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904" Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.160245 4735 scope.go:117] "RemoveContainer" containerID="1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b" Jan 28 10:51:56 crc kubenswrapper[4735]: E0128 10:51:56.160260 4735 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_extract-utilities_redhat-operators-sfgcd_openshift-marketplace_47112b11-5b61-4d2c-8c6b-059b673f74c2_0 in pod sandbox da7ff6e512cee772eabe5d9d7ca6a90d9eec484519c766d12a132458960acd22 from index: no such id: 'af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904'" containerID="af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904" Jan 28 10:51:56 crc 
kubenswrapper[4735]: E0128 10:51:56.179134 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b\": container with ID starting with 1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b not found: ID does not exist" containerID="1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b" Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.179180 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b"} err="failed to get container status \"1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b\": rpc error: code = NotFound desc = could not find container \"1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b\": container with ID starting with 1d60f8e42c34d582477e1db4770742a7872d0c975451dc1b46503a4a620bed1b not found: ID does not exist" Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.179208 4735 scope.go:117] "RemoveContainer" containerID="567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489" Jan 28 10:51:56 crc kubenswrapper[4735]: E0128 10:51:56.184322 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489\": container with ID starting with 567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489 not found: ID does not exist" containerID="567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489" Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.184371 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489"} err="failed to get container status 
\"567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489\": rpc error: code = NotFound desc = could not find container \"567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489\": container with ID starting with 567f1a430e47b31d6388ebe3a54fa7458c5ae13a970df0307f23c79ce876a489 not found: ID does not exist" Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.184394 4735 scope.go:117] "RemoveContainer" containerID="af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904" Jan 28 10:51:56 crc kubenswrapper[4735]: E0128 10:51:56.190006 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904\": container with ID starting with af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904 not found: ID does not exist" containerID="af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904" Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.190065 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904"} err="failed to get container status \"af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904\": rpc error: code = NotFound desc = could not find container \"af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904\": container with ID starting with af5d30dff2f9609cd4d74e44375e4f9981f29b7b77fcac4729b0b659a9f0e904 not found: ID does not exist" Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.221841 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sfgcd"] Jan 28 10:51:56 crc kubenswrapper[4735]: I0128 10:51:56.235849 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sfgcd"] Jan 28 10:51:57 crc kubenswrapper[4735]: I0128 10:51:57.515722 4735 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="47112b11-5b61-4d2c-8c6b-059b673f74c2" path="/var/lib/kubelet/pods/47112b11-5b61-4d2c-8c6b-059b673f74c2/volumes" Jan 28 10:52:19 crc kubenswrapper[4735]: I0128 10:52:19.679797 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lsbrt_4cf87a70-49c9-4cb3-a2be-afd200ca892b/kube-rbac-proxy/0.log" Jan 28 10:52:19 crc kubenswrapper[4735]: I0128 10:52:19.893203 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lsbrt_4cf87a70-49c9-4cb3-a2be-afd200ca892b/controller/0.log" Jan 28 10:52:19 crc kubenswrapper[4735]: I0128 10:52:19.967396 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-frr-files/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.101167 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-frr-files/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.134016 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-reloader/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.134071 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-metrics/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.169998 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-reloader/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.357620 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-frr-files/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.365483 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-reloader/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.382507 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-metrics/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.388040 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-metrics/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.554334 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-frr-files/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.628838 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-metrics/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.629462 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/controller/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.643255 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/cp-reloader/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.880590 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/kube-rbac-proxy-frr/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.891285 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/kube-rbac-proxy/0.log" Jan 28 10:52:20 crc kubenswrapper[4735]: I0128 10:52:20.893259 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/frr-metrics/0.log" Jan 28 10:52:21 crc kubenswrapper[4735]: I0128 10:52:21.184423 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/reloader/0.log" Jan 28 10:52:21 crc kubenswrapper[4735]: I0128 10:52:21.184595 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qk849_0042167e-002b-4fa7-8ff3-161e1326fae1/frr-k8s-webhook-server/0.log" Jan 28 10:52:21 crc kubenswrapper[4735]: I0128 10:52:21.439300 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-54979c8865-g4cx7_8d49adc6-130a-4b8b-8c7b-ab35d0fc5bad/manager/0.log" Jan 28 10:52:21 crc kubenswrapper[4735]: I0128 10:52:21.829424 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cb99f596-8vf2t_3da5666b-ceb3-488f-ac22-1f25ba7cea7f/webhook-server/0.log" Jan 28 10:52:21 crc kubenswrapper[4735]: I0128 10:52:21.881684 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wkrfv_f582a362-710c-4f62-988c-e09f4f34da10/kube-rbac-proxy/0.log" Jan 28 10:52:22 crc kubenswrapper[4735]: I0128 10:52:22.057709 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g8zjj_40f6778a-2f24-4426-9292-5a394fe09d3d/frr/0.log" Jan 28 10:52:22 crc kubenswrapper[4735]: I0128 10:52:22.413884 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wkrfv_f582a362-710c-4f62-988c-e09f4f34da10/speaker/0.log" Jan 28 10:52:37 crc kubenswrapper[4735]: I0128 10:52:37.776130 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/util/0.log" Jan 28 10:52:38 crc kubenswrapper[4735]: I0128 
10:52:38.053966 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/pull/0.log" Jan 28 10:52:38 crc kubenswrapper[4735]: I0128 10:52:38.056951 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/pull/0.log" Jan 28 10:52:38 crc kubenswrapper[4735]: I0128 10:52:38.061891 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/util/0.log" Jan 28 10:52:38 crc kubenswrapper[4735]: I0128 10:52:38.259103 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/util/0.log" Jan 28 10:52:38 crc kubenswrapper[4735]: I0128 10:52:38.259508 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/extract/0.log" Jan 28 10:52:38 crc kubenswrapper[4735]: I0128 10:52:38.291361 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxrw46_54d4636f-758a-41b9-9491-69cb7f38afbc/pull/0.log" Jan 28 10:52:38 crc kubenswrapper[4735]: I0128 10:52:38.810919 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/util/0.log" Jan 28 10:52:38 crc kubenswrapper[4735]: I0128 10:52:38.999413 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/util/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 10:52:39.006085 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/pull/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 10:52:39.032119 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/pull/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 10:52:39.247156 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/util/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 10:52:39.435406 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-utilities/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 10:52:39.605078 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-content/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 10:52:39.639236 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/extract/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 10:52:39.658812 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rwngb_03e770c2-1137-4e04-9775-a1187be4f65d/pull/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 
10:52:39.672927 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-utilities/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 10:52:39.792939 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-content/0.log" Jan 28 10:52:39 crc kubenswrapper[4735]: I0128 10:52:39.992775 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-content/0.log" Jan 28 10:52:40 crc kubenswrapper[4735]: I0128 10:52:40.003406 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/extract-utilities/0.log" Jan 28 10:52:40 crc kubenswrapper[4735]: I0128 10:52:40.388095 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-utilities/0.log" Jan 28 10:52:40 crc kubenswrapper[4735]: I0128 10:52:40.490766 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x2lhf_578b28be-496b-4970-91ff-75695e48b20f/registry-server/0.log" Jan 28 10:52:40 crc kubenswrapper[4735]: I0128 10:52:40.591910 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-utilities/0.log" Jan 28 10:52:40 crc kubenswrapper[4735]: I0128 10:52:40.614830 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-content/0.log" Jan 28 10:52:40 crc kubenswrapper[4735]: I0128 10:52:40.639661 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-content/0.log" Jan 28 10:52:40 crc kubenswrapper[4735]: I0128 10:52:40.811115 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-content/0.log" Jan 28 10:52:40 crc kubenswrapper[4735]: I0128 10:52:40.836621 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/extract-utilities/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.001982 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mrxf2_761659b6-1552-4fd3-8e5a-10bf29a772b3/marketplace-operator/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.155566 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-utilities/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.283950 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-utilities/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.315658 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-swb69_0ed89506-d111-4bdd-b933-d2325e4b897f/registry-server/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.339285 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-content/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.400245 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-content/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.554310 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-utilities/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.604493 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/extract-content/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.668689 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s94c6_f435b834-ada9-4e43-be5a-d7d137c0dda5/registry-server/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.681153 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d79b2_07dd3570-7fe8-4a24-8c94-28c4dc1e80db/extract-utilities/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.831184 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d79b2_07dd3570-7fe8-4a24-8c94-28c4dc1e80db/extract-content/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.843574 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d79b2_07dd3570-7fe8-4a24-8c94-28c4dc1e80db/extract-content/0.log" Jan 28 10:52:41 crc kubenswrapper[4735]: I0128 10:52:41.848445 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d79b2_07dd3570-7fe8-4a24-8c94-28c4dc1e80db/extract-utilities/0.log" Jan 28 10:52:42 crc kubenswrapper[4735]: I0128 10:52:42.010201 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d79b2_07dd3570-7fe8-4a24-8c94-28c4dc1e80db/extract-utilities/0.log" 
Jan 28 10:52:42 crc kubenswrapper[4735]: I0128 10:52:42.012784 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d79b2_07dd3570-7fe8-4a24-8c94-28c4dc1e80db/extract-content/0.log" Jan 28 10:52:42 crc kubenswrapper[4735]: I0128 10:52:42.102287 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-d79b2_07dd3570-7fe8-4a24-8c94-28c4dc1e80db/registry-server/0.log" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.348918 4735 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9bphv"] Jan 28 10:52:56 crc kubenswrapper[4735]: E0128 10:52:56.349952 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerName="extract-utilities" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.349970 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerName="extract-utilities" Jan 28 10:52:56 crc kubenswrapper[4735]: E0128 10:52:56.349994 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerName="extract-content" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.350003 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerName="extract-content" Jan 28 10:52:56 crc kubenswrapper[4735]: E0128 10:52:56.350028 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerName="registry-server" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.350035 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="47112b11-5b61-4d2c-8c6b-059b673f74c2" containerName="registry-server" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.350242 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="47112b11-5b61-4d2c-8c6b-059b673f74c2" 
containerName="registry-server" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.353177 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.363056 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bphv"] Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.421444 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvz6w\" (UniqueName: \"kubernetes.io/projected/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-kube-api-access-pvz6w\") pod \"redhat-marketplace-9bphv\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.421522 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-catalog-content\") pod \"redhat-marketplace-9bphv\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.421660 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-utilities\") pod \"redhat-marketplace-9bphv\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.523811 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvz6w\" (UniqueName: \"kubernetes.io/projected/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-kube-api-access-pvz6w\") pod \"redhat-marketplace-9bphv\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " 
pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.524066 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-catalog-content\") pod \"redhat-marketplace-9bphv\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.524210 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-utilities\") pod \"redhat-marketplace-9bphv\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.524729 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-utilities\") pod \"redhat-marketplace-9bphv\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.525086 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-catalog-content\") pod \"redhat-marketplace-9bphv\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.552542 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvz6w\" (UniqueName: \"kubernetes.io/projected/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-kube-api-access-pvz6w\") pod \"redhat-marketplace-9bphv\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 
10:52:56 crc kubenswrapper[4735]: I0128 10:52:56.680629 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:52:57 crc kubenswrapper[4735]: I0128 10:52:57.189876 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bphv"] Jan 28 10:52:57 crc kubenswrapper[4735]: I0128 10:52:57.461988 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bphv" event={"ID":"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5","Type":"ContainerStarted","Data":"aac7c50bced79d9554c63380d05fbdde0140fce0296b1f8a50a118540e359d47"} Jan 28 10:52:58 crc kubenswrapper[4735]: I0128 10:52:58.471893 4735 generic.go:334] "Generic (PLEG): container finished" podID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerID="19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864" exitCode=0 Jan 28 10:52:58 crc kubenswrapper[4735]: I0128 10:52:58.471950 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bphv" event={"ID":"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5","Type":"ContainerDied","Data":"19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864"} Jan 28 10:52:58 crc kubenswrapper[4735]: I0128 10:52:58.474217 4735 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 10:53:00 crc kubenswrapper[4735]: I0128 10:53:00.508637 4735 generic.go:334] "Generic (PLEG): container finished" podID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerID="bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd" exitCode=0 Jan 28 10:53:00 crc kubenswrapper[4735]: I0128 10:53:00.508721 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bphv" event={"ID":"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5","Type":"ContainerDied","Data":"bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd"} Jan 
28 10:53:01 crc kubenswrapper[4735]: I0128 10:53:01.519073 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bphv" event={"ID":"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5","Type":"ContainerStarted","Data":"3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea"} Jan 28 10:53:01 crc kubenswrapper[4735]: I0128 10:53:01.545057 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9bphv" podStartSLOduration=3.096036114 podStartE2EDuration="5.545034371s" podCreationTimestamp="2026-01-28 10:52:56 +0000 UTC" firstStartedPulling="2026-01-28 10:52:58.474025434 +0000 UTC m=+4231.623689807" lastFinishedPulling="2026-01-28 10:53:00.923023691 +0000 UTC m=+4234.072688064" observedRunningTime="2026-01-28 10:53:01.540258373 +0000 UTC m=+4234.689922766" watchObservedRunningTime="2026-01-28 10:53:01.545034371 +0000 UTC m=+4234.694698744" Jan 28 10:53:05 crc kubenswrapper[4735]: I0128 10:53:05.429292 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:53:05 crc kubenswrapper[4735]: I0128 10:53:05.429811 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:53:06 crc kubenswrapper[4735]: I0128 10:53:06.681527 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:53:06 crc kubenswrapper[4735]: I0128 10:53:06.681907 4735 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:53:06 crc kubenswrapper[4735]: I0128 10:53:06.729882 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:53:07 crc kubenswrapper[4735]: I0128 10:53:07.631289 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:53:07 crc kubenswrapper[4735]: I0128 10:53:07.680610 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bphv"] Jan 28 10:53:09 crc kubenswrapper[4735]: I0128 10:53:09.583961 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9bphv" podUID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerName="registry-server" containerID="cri-o://3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea" gracePeriod=2 Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.133238 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.308665 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-catalog-content\") pod \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.308905 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-utilities\") pod \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.308991 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvz6w\" (UniqueName: \"kubernetes.io/projected/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-kube-api-access-pvz6w\") pod \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\" (UID: \"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5\") " Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.309695 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-utilities" (OuterVolumeSpecName: "utilities") pod "e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" (UID: "e3133ba9-cd1f-401e-9e0f-e33c3d00aff5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.319006 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-kube-api-access-pvz6w" (OuterVolumeSpecName: "kube-api-access-pvz6w") pod "e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" (UID: "e3133ba9-cd1f-401e-9e0f-e33c3d00aff5"). InnerVolumeSpecName "kube-api-access-pvz6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.329940 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" (UID: "e3133ba9-cd1f-401e-9e0f-e33c3d00aff5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.410965 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.411001 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvz6w\" (UniqueName: \"kubernetes.io/projected/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-kube-api-access-pvz6w\") on node \"crc\" DevicePath \"\"" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.411016 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.595030 4735 generic.go:334] "Generic (PLEG): container finished" podID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerID="3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea" exitCode=0 Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.595276 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bphv" event={"ID":"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5","Type":"ContainerDied","Data":"3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea"} Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.595300 4735 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-9bphv" event={"ID":"e3133ba9-cd1f-401e-9e0f-e33c3d00aff5","Type":"ContainerDied","Data":"aac7c50bced79d9554c63380d05fbdde0140fce0296b1f8a50a118540e359d47"} Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.595315 4735 scope.go:117] "RemoveContainer" containerID="3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.595424 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bphv" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.635340 4735 scope.go:117] "RemoveContainer" containerID="bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.636137 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bphv"] Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.645927 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bphv"] Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.660379 4735 scope.go:117] "RemoveContainer" containerID="19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.708479 4735 scope.go:117] "RemoveContainer" containerID="3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea" Jan 28 10:53:10 crc kubenswrapper[4735]: E0128 10:53:10.708953 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea\": container with ID starting with 3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea not found: ID does not exist" containerID="3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.708989 4735 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea"} err="failed to get container status \"3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea\": rpc error: code = NotFound desc = could not find container \"3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea\": container with ID starting with 3ab848e0cfedd7b4b1068622ab384ab342976af06ede7021a5341d1c1108ffea not found: ID does not exist" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.709013 4735 scope.go:117] "RemoveContainer" containerID="bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd" Jan 28 10:53:10 crc kubenswrapper[4735]: E0128 10:53:10.709277 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd\": container with ID starting with bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd not found: ID does not exist" containerID="bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.709302 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd"} err="failed to get container status \"bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd\": rpc error: code = NotFound desc = could not find container \"bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd\": container with ID starting with bed224b0b412833de1b360baf771de59ceece03635ca9a9fa27bd10f64e824dd not found: ID does not exist" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.709320 4735 scope.go:117] "RemoveContainer" containerID="19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864" Jan 28 10:53:10 crc kubenswrapper[4735]: E0128 
10:53:10.709578 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864\": container with ID starting with 19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864 not found: ID does not exist" containerID="19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864" Jan 28 10:53:10 crc kubenswrapper[4735]: I0128 10:53:10.709606 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864"} err="failed to get container status \"19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864\": rpc error: code = NotFound desc = could not find container \"19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864\": container with ID starting with 19a0c45106ccb28fdaf442642ecf25899df36f8f6b1829d8cd6de7d3a66e2864 not found: ID does not exist" Jan 28 10:53:11 crc kubenswrapper[4735]: I0128 10:53:11.512212 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" path="/var/lib/kubelet/pods/e3133ba9-cd1f-401e-9e0f-e33c3d00aff5/volumes" Jan 28 10:53:14 crc kubenswrapper[4735]: E0128 10:53:14.154154 4735 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.236:42154->38.102.83.236:46183: write tcp 38.102.83.236:42154->38.102.83.236:46183: write: connection reset by peer Jan 28 10:53:35 crc kubenswrapper[4735]: I0128 10:53:35.430127 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:53:35 crc kubenswrapper[4735]: I0128 10:53:35.430836 4735 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:54:05 crc kubenswrapper[4735]: I0128 10:54:05.429493 4735 patch_prober.go:28] interesting pod/machine-config-daemon-l7kzt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 10:54:05 crc kubenswrapper[4735]: I0128 10:54:05.430067 4735 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 10:54:05 crc kubenswrapper[4735]: I0128 10:54:05.430111 4735 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" Jan 28 10:54:05 crc kubenswrapper[4735]: I0128 10:54:05.430828 4735 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4"} pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 10:54:05 crc kubenswrapper[4735]: I0128 10:54:05.430871 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerName="machine-config-daemon" 
containerID="cri-o://f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" gracePeriod=600 Jan 28 10:54:05 crc kubenswrapper[4735]: E0128 10:54:05.577705 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:54:06 crc kubenswrapper[4735]: I0128 10:54:06.155915 4735 generic.go:334] "Generic (PLEG): container finished" podID="2f975eb5-cf83-4145-8d73-879cc67863cf" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" exitCode=0 Jan 28 10:54:06 crc kubenswrapper[4735]: I0128 10:54:06.155965 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" event={"ID":"2f975eb5-cf83-4145-8d73-879cc67863cf","Type":"ContainerDied","Data":"f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4"} Jan 28 10:54:06 crc kubenswrapper[4735]: I0128 10:54:06.156008 4735 scope.go:117] "RemoveContainer" containerID="00942994fe73689ac14efdb488c14b528c781f0c4569d2a1e97e5dd87ba17b37" Jan 28 10:54:06 crc kubenswrapper[4735]: I0128 10:54:06.156647 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:54:06 crc kubenswrapper[4735]: E0128 10:54:06.157956 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" 
podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:54:20 crc kubenswrapper[4735]: I0128 10:54:20.504180 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:54:20 crc kubenswrapper[4735]: E0128 10:54:20.505207 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:54:27 crc kubenswrapper[4735]: E0128 10:54:27.772955 4735 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115a5701_253b_423c_8989_baddf5df509d.slice/crio-d085f490fc70e4f3be85dcce31ae9bb2ef43ba9b7b2ed6b6d6b8341696c7fb67.scope\": RecentStats: unable to find data in memory cache]" Jan 28 10:54:28 crc kubenswrapper[4735]: I0128 10:54:28.390338 4735 generic.go:334] "Generic (PLEG): container finished" podID="115a5701-253b-423c-8989-baddf5df509d" containerID="d085f490fc70e4f3be85dcce31ae9bb2ef43ba9b7b2ed6b6d6b8341696c7fb67" exitCode=0 Jan 28 10:54:28 crc kubenswrapper[4735]: I0128 10:54:28.390954 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" event={"ID":"115a5701-253b-423c-8989-baddf5df509d","Type":"ContainerDied","Data":"d085f490fc70e4f3be85dcce31ae9bb2ef43ba9b7b2ed6b6d6b8341696c7fb67"} Jan 28 10:54:28 crc kubenswrapper[4735]: I0128 10:54:28.392640 4735 scope.go:117] "RemoveContainer" containerID="d085f490fc70e4f3be85dcce31ae9bb2ef43ba9b7b2ed6b6d6b8341696c7fb67" Jan 28 10:54:28 crc kubenswrapper[4735]: I0128 10:54:28.836931 4735 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-5mmtg_must-gather-2jqfn_115a5701-253b-423c-8989-baddf5df509d/gather/0.log" Jan 28 10:54:31 crc kubenswrapper[4735]: I0128 10:54:31.502903 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:54:31 crc kubenswrapper[4735]: E0128 10:54:31.504058 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:54:40 crc kubenswrapper[4735]: I0128 10:54:40.435705 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5mmtg/must-gather-2jqfn"] Jan 28 10:54:40 crc kubenswrapper[4735]: I0128 10:54:40.436608 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" podUID="115a5701-253b-423c-8989-baddf5df509d" containerName="copy" containerID="cri-o://ca4172f0a09f75f365d3693d5f2c0ffed44a8e067ec8a4f2fce3e844ca11e7b3" gracePeriod=2 Jan 28 10:54:40 crc kubenswrapper[4735]: I0128 10:54:40.460805 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5mmtg/must-gather-2jqfn"] Jan 28 10:54:40 crc kubenswrapper[4735]: I0128 10:54:40.597524 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5mmtg_must-gather-2jqfn_115a5701-253b-423c-8989-baddf5df509d/copy/0.log" Jan 28 10:54:40 crc kubenswrapper[4735]: I0128 10:54:40.598811 4735 generic.go:334] "Generic (PLEG): container finished" podID="115a5701-253b-423c-8989-baddf5df509d" containerID="ca4172f0a09f75f365d3693d5f2c0ffed44a8e067ec8a4f2fce3e844ca11e7b3" exitCode=143 Jan 28 10:54:40 crc 
kubenswrapper[4735]: I0128 10:54:40.968058 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5mmtg_must-gather-2jqfn_115a5701-253b-423c-8989-baddf5df509d/copy/0.log" Jan 28 10:54:40 crc kubenswrapper[4735]: I0128 10:54:40.968410 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.189052 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-558wj\" (UniqueName: \"kubernetes.io/projected/115a5701-253b-423c-8989-baddf5df509d-kube-api-access-558wj\") pod \"115a5701-253b-423c-8989-baddf5df509d\" (UID: \"115a5701-253b-423c-8989-baddf5df509d\") " Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.189478 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/115a5701-253b-423c-8989-baddf5df509d-must-gather-output\") pod \"115a5701-253b-423c-8989-baddf5df509d\" (UID: \"115a5701-253b-423c-8989-baddf5df509d\") " Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.202009 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115a5701-253b-423c-8989-baddf5df509d-kube-api-access-558wj" (OuterVolumeSpecName: "kube-api-access-558wj") pod "115a5701-253b-423c-8989-baddf5df509d" (UID: "115a5701-253b-423c-8989-baddf5df509d"). InnerVolumeSpecName "kube-api-access-558wj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.292298 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-558wj\" (UniqueName: \"kubernetes.io/projected/115a5701-253b-423c-8989-baddf5df509d-kube-api-access-558wj\") on node \"crc\" DevicePath \"\"" Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.328021 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115a5701-253b-423c-8989-baddf5df509d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "115a5701-253b-423c-8989-baddf5df509d" (UID: "115a5701-253b-423c-8989-baddf5df509d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.394673 4735 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/115a5701-253b-423c-8989-baddf5df509d-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.520081 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115a5701-253b-423c-8989-baddf5df509d" path="/var/lib/kubelet/pods/115a5701-253b-423c-8989-baddf5df509d/volumes" Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.608073 4735 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5mmtg_must-gather-2jqfn_115a5701-253b-423c-8989-baddf5df509d/copy/0.log" Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.608567 4735 scope.go:117] "RemoveContainer" containerID="ca4172f0a09f75f365d3693d5f2c0ffed44a8e067ec8a4f2fce3e844ca11e7b3" Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.608796 4735 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5mmtg/must-gather-2jqfn" Jan 28 10:54:41 crc kubenswrapper[4735]: I0128 10:54:41.634890 4735 scope.go:117] "RemoveContainer" containerID="d085f490fc70e4f3be85dcce31ae9bb2ef43ba9b7b2ed6b6d6b8341696c7fb67" Jan 28 10:54:45 crc kubenswrapper[4735]: I0128 10:54:45.503524 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:54:45 crc kubenswrapper[4735]: E0128 10:54:45.504319 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:54:59 crc kubenswrapper[4735]: I0128 10:54:59.503436 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:54:59 crc kubenswrapper[4735]: E0128 10:54:59.504298 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:55:12 crc kubenswrapper[4735]: I0128 10:55:12.503360 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:55:12 crc kubenswrapper[4735]: E0128 10:55:12.504346 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:55:27 crc kubenswrapper[4735]: I0128 10:55:27.517132 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:55:27 crc kubenswrapper[4735]: E0128 10:55:27.518058 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:55:39 crc kubenswrapper[4735]: I0128 10:55:39.502398 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:55:39 crc kubenswrapper[4735]: E0128 10:55:39.503189 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:55:51 crc kubenswrapper[4735]: I0128 10:55:51.503029 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:55:51 crc kubenswrapper[4735]: E0128 10:55:51.504333 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:55:56 crc kubenswrapper[4735]: I0128 10:55:56.321288 4735 scope.go:117] "RemoveContainer" containerID="24ef0da7c8fa292ce395ce714ca6ff205a5aeb06a71cf54bbfe910dd14103ab7" Jan 28 10:56:04 crc kubenswrapper[4735]: I0128 10:56:04.503349 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:56:04 crc kubenswrapper[4735]: E0128 10:56:04.504140 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:56:17 crc kubenswrapper[4735]: I0128 10:56:17.503727 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:56:17 crc kubenswrapper[4735]: E0128 10:56:17.504575 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:56:32 crc kubenswrapper[4735]: I0128 10:56:32.503614 4735 scope.go:117] "RemoveContainer" 
containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:56:32 crc kubenswrapper[4735]: E0128 10:56:32.505013 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:56:45 crc kubenswrapper[4735]: I0128 10:56:45.504260 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:56:45 crc kubenswrapper[4735]: E0128 10:56:45.505365 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:56:58 crc kubenswrapper[4735]: I0128 10:56:58.502722 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:56:58 crc kubenswrapper[4735]: E0128 10:56:58.503572 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.247067 4735 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r4cck"] Jan 28 10:57:09 crc kubenswrapper[4735]: E0128 10:57:09.248019 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerName="extract-content" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.248032 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerName="extract-content" Jan 28 10:57:09 crc kubenswrapper[4735]: E0128 10:57:09.248058 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115a5701-253b-423c-8989-baddf5df509d" containerName="gather" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.248064 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="115a5701-253b-423c-8989-baddf5df509d" containerName="gather" Jan 28 10:57:09 crc kubenswrapper[4735]: E0128 10:57:09.248078 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115a5701-253b-423c-8989-baddf5df509d" containerName="copy" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.248086 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="115a5701-253b-423c-8989-baddf5df509d" containerName="copy" Jan 28 10:57:09 crc kubenswrapper[4735]: E0128 10:57:09.248097 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerName="extract-utilities" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.248104 4735 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerName="extract-utilities" Jan 28 10:57:09 crc kubenswrapper[4735]: E0128 10:57:09.248122 4735 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerName="registry-server" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.248128 4735 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerName="registry-server" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.248290 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3133ba9-cd1f-401e-9e0f-e33c3d00aff5" containerName="registry-server" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.248304 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="115a5701-253b-423c-8989-baddf5df509d" containerName="copy" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.248316 4735 memory_manager.go:354] "RemoveStaleState removing state" podUID="115a5701-253b-423c-8989-baddf5df509d" containerName="gather" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.249519 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.258596 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4cck"] Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.300572 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-catalog-content\") pod \"certified-operators-r4cck\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.300661 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-utilities\") pod \"certified-operators-r4cck\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.300800 4735 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-dp7pr\" (UniqueName: \"kubernetes.io/projected/0c9207e7-6d0a-4929-8271-2646e7fcdc70-kube-api-access-dp7pr\") pod \"certified-operators-r4cck\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.403263 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp7pr\" (UniqueName: \"kubernetes.io/projected/0c9207e7-6d0a-4929-8271-2646e7fcdc70-kube-api-access-dp7pr\") pod \"certified-operators-r4cck\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.403564 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-catalog-content\") pod \"certified-operators-r4cck\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.403736 4735 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-utilities\") pod \"certified-operators-r4cck\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.404187 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-catalog-content\") pod \"certified-operators-r4cck\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.404520 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-utilities\") pod \"certified-operators-r4cck\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.662828 4735 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp7pr\" (UniqueName: \"kubernetes.io/projected/0c9207e7-6d0a-4929-8271-2646e7fcdc70-kube-api-access-dp7pr\") pod \"certified-operators-r4cck\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:09 crc kubenswrapper[4735]: I0128 10:57:09.879660 4735 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:10 crc kubenswrapper[4735]: I0128 10:57:10.403084 4735 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4cck"] Jan 28 10:57:10 crc kubenswrapper[4735]: I0128 10:57:10.502770 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:57:10 crc kubenswrapper[4735]: E0128 10:57:10.503195 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:57:11 crc kubenswrapper[4735]: I0128 10:57:11.151170 4735 generic.go:334] "Generic (PLEG): container finished" podID="0c9207e7-6d0a-4929-8271-2646e7fcdc70" containerID="32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981" exitCode=0 Jan 28 10:57:11 crc kubenswrapper[4735]: I0128 10:57:11.151480 
4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4cck" event={"ID":"0c9207e7-6d0a-4929-8271-2646e7fcdc70","Type":"ContainerDied","Data":"32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981"} Jan 28 10:57:11 crc kubenswrapper[4735]: I0128 10:57:11.151548 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4cck" event={"ID":"0c9207e7-6d0a-4929-8271-2646e7fcdc70","Type":"ContainerStarted","Data":"3900ec16f4a92b34990186992c948ad6b648a8810860a4fd2be1db498087f383"} Jan 28 10:57:13 crc kubenswrapper[4735]: I0128 10:57:13.172407 4735 generic.go:334] "Generic (PLEG): container finished" podID="0c9207e7-6d0a-4929-8271-2646e7fcdc70" containerID="4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2" exitCode=0 Jan 28 10:57:13 crc kubenswrapper[4735]: I0128 10:57:13.172469 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4cck" event={"ID":"0c9207e7-6d0a-4929-8271-2646e7fcdc70","Type":"ContainerDied","Data":"4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2"} Jan 28 10:57:14 crc kubenswrapper[4735]: I0128 10:57:14.215259 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4cck" event={"ID":"0c9207e7-6d0a-4929-8271-2646e7fcdc70","Type":"ContainerStarted","Data":"4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032"} Jan 28 10:57:14 crc kubenswrapper[4735]: I0128 10:57:14.257734 4735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r4cck" podStartSLOduration=2.812525218 podStartE2EDuration="5.25770941s" podCreationTimestamp="2026-01-28 10:57:09 +0000 UTC" firstStartedPulling="2026-01-28 10:57:11.154144765 +0000 UTC m=+4484.303809138" lastFinishedPulling="2026-01-28 10:57:13.599328927 +0000 UTC m=+4486.748993330" observedRunningTime="2026-01-28 
10:57:14.252196586 +0000 UTC m=+4487.401860959" watchObservedRunningTime="2026-01-28 10:57:14.25770941 +0000 UTC m=+4487.407373783" Jan 28 10:57:19 crc kubenswrapper[4735]: I0128 10:57:19.880843 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:19 crc kubenswrapper[4735]: I0128 10:57:19.881448 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:19 crc kubenswrapper[4735]: I0128 10:57:19.931616 4735 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:20 crc kubenswrapper[4735]: I0128 10:57:20.318901 4735 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:20 crc kubenswrapper[4735]: I0128 10:57:20.376693 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4cck"] Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.295893 4735 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r4cck" podUID="0c9207e7-6d0a-4929-8271-2646e7fcdc70" containerName="registry-server" containerID="cri-o://4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032" gracePeriod=2 Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.503578 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:57:22 crc kubenswrapper[4735]: E0128 10:57:22.504039 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.829709 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.883782 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp7pr\" (UniqueName: \"kubernetes.io/projected/0c9207e7-6d0a-4929-8271-2646e7fcdc70-kube-api-access-dp7pr\") pod \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.884035 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-utilities\") pod \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.884210 4735 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-catalog-content\") pod \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\" (UID: \"0c9207e7-6d0a-4929-8271-2646e7fcdc70\") " Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.886159 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-utilities" (OuterVolumeSpecName: "utilities") pod "0c9207e7-6d0a-4929-8271-2646e7fcdc70" (UID: "0c9207e7-6d0a-4929-8271-2646e7fcdc70"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.896661 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9207e7-6d0a-4929-8271-2646e7fcdc70-kube-api-access-dp7pr" (OuterVolumeSpecName: "kube-api-access-dp7pr") pod "0c9207e7-6d0a-4929-8271-2646e7fcdc70" (UID: "0c9207e7-6d0a-4929-8271-2646e7fcdc70"). InnerVolumeSpecName "kube-api-access-dp7pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.943121 4735 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c9207e7-6d0a-4929-8271-2646e7fcdc70" (UID: "0c9207e7-6d0a-4929-8271-2646e7fcdc70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.986662 4735 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp7pr\" (UniqueName: \"kubernetes.io/projected/0c9207e7-6d0a-4929-8271-2646e7fcdc70-kube-api-access-dp7pr\") on node \"crc\" DevicePath \"\"" Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.986699 4735 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 10:57:22 crc kubenswrapper[4735]: I0128 10:57:22.986713 4735 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9207e7-6d0a-4929-8271-2646e7fcdc70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.306730 4735 generic.go:334] "Generic (PLEG): container finished" podID="0c9207e7-6d0a-4929-8271-2646e7fcdc70" 
containerID="4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032" exitCode=0 Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.306833 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4cck" event={"ID":"0c9207e7-6d0a-4929-8271-2646e7fcdc70","Type":"ContainerDied","Data":"4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032"} Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.306913 4735 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4cck" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.307160 4735 scope.go:117] "RemoveContainer" containerID="4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.307144 4735 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4cck" event={"ID":"0c9207e7-6d0a-4929-8271-2646e7fcdc70","Type":"ContainerDied","Data":"3900ec16f4a92b34990186992c948ad6b648a8810860a4fd2be1db498087f383"} Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.337447 4735 scope.go:117] "RemoveContainer" containerID="4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.366298 4735 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4cck"] Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.381841 4735 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r4cck"] Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.389813 4735 scope.go:117] "RemoveContainer" containerID="32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.425740 4735 scope.go:117] "RemoveContainer" containerID="4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032" Jan 28 
10:57:23 crc kubenswrapper[4735]: E0128 10:57:23.426453 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032\": container with ID starting with 4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032 not found: ID does not exist" containerID="4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.426515 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032"} err="failed to get container status \"4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032\": rpc error: code = NotFound desc = could not find container \"4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032\": container with ID starting with 4139fb421c9c4cef3a5a7bc09f8c4443826ac49ed10e7723ec5a61f487d37032 not found: ID does not exist" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.426545 4735 scope.go:117] "RemoveContainer" containerID="4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2" Jan 28 10:57:23 crc kubenswrapper[4735]: E0128 10:57:23.426895 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2\": container with ID starting with 4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2 not found: ID does not exist" containerID="4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.426922 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2"} err="failed to get container status 
\"4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2\": rpc error: code = NotFound desc = could not find container \"4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2\": container with ID starting with 4c8533d8eab8d2f1fd7e1565e85d127a018b1032a42c2c1aba1e7f223d7578f2 not found: ID does not exist" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.426940 4735 scope.go:117] "RemoveContainer" containerID="32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981" Jan 28 10:57:23 crc kubenswrapper[4735]: E0128 10:57:23.427258 4735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981\": container with ID starting with 32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981 not found: ID does not exist" containerID="32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.427302 4735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981"} err="failed to get container status \"32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981\": rpc error: code = NotFound desc = could not find container \"32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981\": container with ID starting with 32b0028bf573ea7dabdbc974efc9d05d097b5c048eb7c72ea9a3508dc3a5f981 not found: ID does not exist" Jan 28 10:57:23 crc kubenswrapper[4735]: I0128 10:57:23.519989 4735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c9207e7-6d0a-4929-8271-2646e7fcdc70" path="/var/lib/kubelet/pods/0c9207e7-6d0a-4929-8271-2646e7fcdc70/volumes" Jan 28 10:57:34 crc kubenswrapper[4735]: I0128 10:57:34.502190 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 
10:57:34 crc kubenswrapper[4735]: E0128 10:57:34.502990 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:57:47 crc kubenswrapper[4735]: I0128 10:57:47.511064 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:57:47 crc kubenswrapper[4735]: E0128 10:57:47.512040 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:58:02 crc kubenswrapper[4735]: I0128 10:58:02.504521 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:58:02 crc kubenswrapper[4735]: E0128 10:58:02.505336 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:58:14 crc kubenswrapper[4735]: I0128 10:58:14.503216 4735 scope.go:117] "RemoveContainer" 
containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:58:14 crc kubenswrapper[4735]: E0128 10:58:14.504120 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:58:29 crc kubenswrapper[4735]: I0128 10:58:29.503416 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:58:29 crc kubenswrapper[4735]: E0128 10:58:29.504312 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:58:41 crc kubenswrapper[4735]: I0128 10:58:41.503293 4735 scope.go:117] "RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:58:41 crc kubenswrapper[4735]: E0128 10:58:41.504022 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf" Jan 28 10:58:55 crc kubenswrapper[4735]: I0128 10:58:55.502790 4735 scope.go:117] 
"RemoveContainer" containerID="f61bbb4397058b1c82b84915995f4c1d6f6b77edd433a5dfe87513ca4887e8f4" Jan 28 10:58:55 crc kubenswrapper[4735]: E0128 10:58:55.503662 4735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l7kzt_openshift-machine-config-operator(2f975eb5-cf83-4145-8d73-879cc67863cf)\"" pod="openshift-machine-config-operator/machine-config-daemon-l7kzt" podUID="2f975eb5-cf83-4145-8d73-879cc67863cf"